Test Report: KVM_Linux_crio 18757

76fd79497ca7607997860d279d48d970ddc3ee52:2024-04-25:34200

Failed tests (31/311)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 153.27
32 TestAddons/parallel/MetricsServer 339.78
44 TestAddons/StoppedEnableDisable 154.32
149 TestFunctional/parallel/MountCmd/VerifyCleanup 3.01
163 TestMultiControlPlane/serial/StopSecondaryNode 142.02
165 TestMultiControlPlane/serial/RestartSecondaryNode 50.02
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 372.33
170 TestMultiControlPlane/serial/StopCluster 142.05
230 TestMultiNode/serial/RestartKeepsNodes 314.32
232 TestMultiNode/serial/StopMultiNode 141.52
239 TestPreload 351.83
247 TestKubernetesUpgrade 377.64
265 TestPause/serial/SecondStartNoReconfiguration 78.02
313 TestStartStop/group/old-k8s-version/serial/FirstStart 315.19
338 TestStartStop/group/embed-certs/serial/Stop 139.19
342 TestStartStop/group/no-preload/serial/Stop 139.2
344 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.15
345 TestStartStop/group/old-k8s-version/serial/DeployApp 0.52
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.41
347 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 101.15
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
355 TestStartStop/group/old-k8s-version/serial/SecondStart 749.29
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.42
357 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.72
358 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.78
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.57
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 425.26
361 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 350.96
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 379.51
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 91.29
TestAddons/parallel/Ingress (153.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-477322 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-477322 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-477322 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [174bca0d-e34d-4acf-8cb7-74f929b70346] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [174bca0d-e34d-4acf-8cb7-74f929b70346] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004851449s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-477322 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.116784591s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-477322 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.239
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-477322 addons disable ingress --alsologtostderr -v=1: (7.826687106s)
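Exit status 28 from the curl run above is curl's "operation timed out" code, i.e. the request through the ingress never returned within the ssh command's window. For reference only, a minimal manual re-check of the same path (profile name taken from this run; these are ordinary kubectl/minikube commands, not part of the recorded test output) could look like:

  # confirm the ingress-nginx controller pod and the test ingress object exist
  kubectl --context addons-477322 -n ingress-nginx get pods -o wide
  kubectl --context addons-477322 get ingress
  # repeat the probe with an explicit timeout and verbose output
  out/minikube-linux-amd64 -p addons-477322 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"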
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-477322 -n addons-477322
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-477322 logs -n 25: (1.502643454s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-019320 | jenkins | v1.33.0 | 25 Apr 24 18:31 UTC |                     |
	|         | -p download-only-019320                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:32 UTC |
	| delete  | -p download-only-019320                                                                     | download-only-019320 | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:32 UTC |
	| delete  | -p download-only-587952                                                                     | download-only-587952 | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:32 UTC |
	| delete  | -p download-only-019320                                                                     | download-only-019320 | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-815806 | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC |                     |
	|         | binary-mirror-815806                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42043                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-815806                                                                     | binary-mirror-815806 | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC |                     |
	|         | addons-477322                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC |                     |
	|         | addons-477322                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-477322 --wait=true                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:35 UTC | 25 Apr 24 18:35 UTC |
	|         | -p addons-477322                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:35 UTC | 25 Apr 24 18:35 UTC |
	|         | addons-477322                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:35 UTC | 25 Apr 24 18:36 UTC |
	|         | -p addons-477322                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-477322 addons disable                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-477322 ip                                                                            | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	| addons  | addons-477322 addons disable                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-477322 ssh cat                                                                       | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | /opt/local-path-provisioner/pvc-c6aa81f4-fb5f-4681-a571-2703b02db912_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-477322 addons disable                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | addons-477322                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-477322 ssh curl -s                                                                   | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-477322 addons                                                                        | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-477322 addons                                                                        | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-477322 ip                                                                            | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:38 UTC | 25 Apr 24 18:38 UTC |
	| addons  | addons-477322 addons disable                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:38 UTC | 25 Apr 24 18:38 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-477322 addons disable                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:38 UTC | 25 Apr 24 18:38 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 18:32:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 18:32:08.876791   14407 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:32:08.876916   14407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:32:08.876925   14407 out.go:304] Setting ErrFile to fd 2...
	I0425 18:32:08.876930   14407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:32:08.877114   14407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:32:08.877755   14407 out.go:298] Setting JSON to false
	I0425 18:32:08.878614   14407 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":875,"bootTime":1714069054,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 18:32:08.878675   14407 start.go:139] virtualization: kvm guest
	I0425 18:32:08.880727   14407 out.go:177] * [addons-477322] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 18:32:08.882584   14407 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 18:32:08.883998   14407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 18:32:08.882607   14407 notify.go:220] Checking for updates...
	I0425 18:32:08.886576   14407 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:32:08.888028   14407 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:32:08.889490   14407 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 18:32:08.890830   14407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 18:32:08.892174   14407 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 18:32:08.922804   14407 out.go:177] * Using the kvm2 driver based on user configuration
	I0425 18:32:08.924096   14407 start.go:297] selected driver: kvm2
	I0425 18:32:08.924122   14407 start.go:901] validating driver "kvm2" against <nil>
	I0425 18:32:08.924135   14407 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 18:32:08.924812   14407 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:32:08.924891   14407 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 18:32:08.938794   14407 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 18:32:08.938846   14407 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 18:32:08.939031   14407 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 18:32:08.939079   14407 cni.go:84] Creating CNI manager for ""
	I0425 18:32:08.939091   14407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 18:32:08.939099   14407 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 18:32:08.939141   14407 start.go:340] cluster config:
	{Name:addons-477322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-477322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:32:08.939229   14407 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:32:08.940844   14407 out.go:177] * Starting "addons-477322" primary control-plane node in "addons-477322" cluster
	I0425 18:32:08.942146   14407 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:32:08.942182   14407 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 18:32:08.942192   14407 cache.go:56] Caching tarball of preloaded images
	I0425 18:32:08.942275   14407 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 18:32:08.942287   14407 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 18:32:08.942574   14407 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/config.json ...
	I0425 18:32:08.942593   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/config.json: {Name:mkfbbe8b32ad34fd727afe9be4baba9b3add5b51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:08.942715   14407 start.go:360] acquireMachinesLock for addons-477322: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 18:32:08.942757   14407 start.go:364] duration metric: took 29.658µs to acquireMachinesLock for "addons-477322"
	I0425 18:32:08.942773   14407 start.go:93] Provisioning new machine with config: &{Name:addons-477322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-477322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:32:08.942828   14407 start.go:125] createHost starting for "" (driver="kvm2")
	I0425 18:32:08.944458   14407 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0425 18:32:08.944599   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:32:08.944635   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:32:08.958849   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I0425 18:32:08.959520   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:32:08.960319   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:32:08.960341   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:32:08.960865   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:32:08.961094   14407 main.go:141] libmachine: (addons-477322) Calling .GetMachineName
	I0425 18:32:08.961247   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:08.961390   14407 start.go:159] libmachine.API.Create for "addons-477322" (driver="kvm2")
	I0425 18:32:08.961425   14407 client.go:168] LocalClient.Create starting
	I0425 18:32:08.961471   14407 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 18:32:09.117809   14407 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 18:32:09.209933   14407 main.go:141] libmachine: Running pre-create checks...
	I0425 18:32:09.209955   14407 main.go:141] libmachine: (addons-477322) Calling .PreCreateCheck
	I0425 18:32:09.210415   14407 main.go:141] libmachine: (addons-477322) Calling .GetConfigRaw
	I0425 18:32:09.210866   14407 main.go:141] libmachine: Creating machine...
	I0425 18:32:09.210882   14407 main.go:141] libmachine: (addons-477322) Calling .Create
	I0425 18:32:09.210996   14407 main.go:141] libmachine: (addons-477322) Creating KVM machine...
	I0425 18:32:09.212276   14407 main.go:141] libmachine: (addons-477322) DBG | found existing default KVM network
	I0425 18:32:09.212956   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:09.212840   14429 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0425 18:32:09.212986   14407 main.go:141] libmachine: (addons-477322) DBG | created network xml: 
	I0425 18:32:09.213012   14407 main.go:141] libmachine: (addons-477322) DBG | <network>
	I0425 18:32:09.213026   14407 main.go:141] libmachine: (addons-477322) DBG |   <name>mk-addons-477322</name>
	I0425 18:32:09.213037   14407 main.go:141] libmachine: (addons-477322) DBG |   <dns enable='no'/>
	I0425 18:32:09.213046   14407 main.go:141] libmachine: (addons-477322) DBG |   
	I0425 18:32:09.213061   14407 main.go:141] libmachine: (addons-477322) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0425 18:32:09.213067   14407 main.go:141] libmachine: (addons-477322) DBG |     <dhcp>
	I0425 18:32:09.213072   14407 main.go:141] libmachine: (addons-477322) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0425 18:32:09.213077   14407 main.go:141] libmachine: (addons-477322) DBG |     </dhcp>
	I0425 18:32:09.213084   14407 main.go:141] libmachine: (addons-477322) DBG |   </ip>
	I0425 18:32:09.213089   14407 main.go:141] libmachine: (addons-477322) DBG |   
	I0425 18:32:09.213095   14407 main.go:141] libmachine: (addons-477322) DBG | </network>
	I0425 18:32:09.213104   14407 main.go:141] libmachine: (addons-477322) DBG | 
	I0425 18:32:09.218454   14407 main.go:141] libmachine: (addons-477322) DBG | trying to create private KVM network mk-addons-477322 192.168.39.0/24...
	I0425 18:32:09.280699   14407 main.go:141] libmachine: (addons-477322) DBG | private KVM network mk-addons-477322 192.168.39.0/24 created
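The XML above is the transient network definition the kvm2 driver hands to libvirt. As a sketch only (assuming access to the same qemu:///system URI used by this run), the resulting network can be inspected with the stock virsh tooling:

  virsh --connect qemu:///system net-list --all
  virsh --connect qemu:///system net-dumpxml mk-addons-477322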
	I0425 18:32:09.280761   14407 main.go:141] libmachine: (addons-477322) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322 ...
	I0425 18:32:09.280786   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:09.280643   14429 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:32:09.280815   14407 main.go:141] libmachine: (addons-477322) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 18:32:09.280840   14407 main.go:141] libmachine: (addons-477322) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 18:32:09.527664   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:09.527510   14429 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa...
	I0425 18:32:09.671854   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:09.671728   14429 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/addons-477322.rawdisk...
	I0425 18:32:09.671880   14407 main.go:141] libmachine: (addons-477322) DBG | Writing magic tar header
	I0425 18:32:09.671890   14407 main.go:141] libmachine: (addons-477322) DBG | Writing SSH key tar header
	I0425 18:32:09.671900   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:09.671863   14429 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322 ...
	I0425 18:32:09.671977   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322
	I0425 18:32:09.672001   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322 (perms=drwx------)
	I0425 18:32:09.672011   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 18:32:09.672024   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:32:09.672034   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 18:32:09.672048   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 18:32:09.672056   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins
	I0425 18:32:09.672068   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home
	I0425 18:32:09.672078   14407 main.go:141] libmachine: (addons-477322) DBG | Skipping /home - not owner
	I0425 18:32:09.672091   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 18:32:09.672108   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 18:32:09.672122   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 18:32:09.672138   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 18:32:09.672151   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 18:32:09.672165   14407 main.go:141] libmachine: (addons-477322) Creating domain...
	I0425 18:32:09.673538   14407 main.go:141] libmachine: (addons-477322) define libvirt domain using xml: 
	I0425 18:32:09.673582   14407 main.go:141] libmachine: (addons-477322) <domain type='kvm'>
	I0425 18:32:09.673598   14407 main.go:141] libmachine: (addons-477322)   <name>addons-477322</name>
	I0425 18:32:09.673614   14407 main.go:141] libmachine: (addons-477322)   <memory unit='MiB'>4000</memory>
	I0425 18:32:09.673625   14407 main.go:141] libmachine: (addons-477322)   <vcpu>2</vcpu>
	I0425 18:32:09.673639   14407 main.go:141] libmachine: (addons-477322)   <features>
	I0425 18:32:09.673653   14407 main.go:141] libmachine: (addons-477322)     <acpi/>
	I0425 18:32:09.673666   14407 main.go:141] libmachine: (addons-477322)     <apic/>
	I0425 18:32:09.673678   14407 main.go:141] libmachine: (addons-477322)     <pae/>
	I0425 18:32:09.673688   14407 main.go:141] libmachine: (addons-477322)     
	I0425 18:32:09.673697   14407 main.go:141] libmachine: (addons-477322)   </features>
	I0425 18:32:09.673708   14407 main.go:141] libmachine: (addons-477322)   <cpu mode='host-passthrough'>
	I0425 18:32:09.673716   14407 main.go:141] libmachine: (addons-477322)   
	I0425 18:32:09.673728   14407 main.go:141] libmachine: (addons-477322)   </cpu>
	I0425 18:32:09.673740   14407 main.go:141] libmachine: (addons-477322)   <os>
	I0425 18:32:09.673753   14407 main.go:141] libmachine: (addons-477322)     <type>hvm</type>
	I0425 18:32:09.673763   14407 main.go:141] libmachine: (addons-477322)     <boot dev='cdrom'/>
	I0425 18:32:09.673771   14407 main.go:141] libmachine: (addons-477322)     <boot dev='hd'/>
	I0425 18:32:09.673784   14407 main.go:141] libmachine: (addons-477322)     <bootmenu enable='no'/>
	I0425 18:32:09.673794   14407 main.go:141] libmachine: (addons-477322)   </os>
	I0425 18:32:09.673803   14407 main.go:141] libmachine: (addons-477322)   <devices>
	I0425 18:32:09.673814   14407 main.go:141] libmachine: (addons-477322)     <disk type='file' device='cdrom'>
	I0425 18:32:09.673832   14407 main.go:141] libmachine: (addons-477322)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/boot2docker.iso'/>
	I0425 18:32:09.673847   14407 main.go:141] libmachine: (addons-477322)       <target dev='hdc' bus='scsi'/>
	I0425 18:32:09.673859   14407 main.go:141] libmachine: (addons-477322)       <readonly/>
	I0425 18:32:09.673870   14407 main.go:141] libmachine: (addons-477322)     </disk>
	I0425 18:32:09.673880   14407 main.go:141] libmachine: (addons-477322)     <disk type='file' device='disk'>
	I0425 18:32:09.673894   14407 main.go:141] libmachine: (addons-477322)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 18:32:09.673911   14407 main.go:141] libmachine: (addons-477322)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/addons-477322.rawdisk'/>
	I0425 18:32:09.673926   14407 main.go:141] libmachine: (addons-477322)       <target dev='hda' bus='virtio'/>
	I0425 18:32:09.673938   14407 main.go:141] libmachine: (addons-477322)     </disk>
	I0425 18:32:09.673948   14407 main.go:141] libmachine: (addons-477322)     <interface type='network'>
	I0425 18:32:09.673960   14407 main.go:141] libmachine: (addons-477322)       <source network='mk-addons-477322'/>
	I0425 18:32:09.673970   14407 main.go:141] libmachine: (addons-477322)       <model type='virtio'/>
	I0425 18:32:09.673978   14407 main.go:141] libmachine: (addons-477322)     </interface>
	I0425 18:32:09.673989   14407 main.go:141] libmachine: (addons-477322)     <interface type='network'>
	I0425 18:32:09.674014   14407 main.go:141] libmachine: (addons-477322)       <source network='default'/>
	I0425 18:32:09.674049   14407 main.go:141] libmachine: (addons-477322)       <model type='virtio'/>
	I0425 18:32:09.674061   14407 main.go:141] libmachine: (addons-477322)     </interface>
	I0425 18:32:09.674079   14407 main.go:141] libmachine: (addons-477322)     <serial type='pty'>
	I0425 18:32:09.674092   14407 main.go:141] libmachine: (addons-477322)       <target port='0'/>
	I0425 18:32:09.674098   14407 main.go:141] libmachine: (addons-477322)     </serial>
	I0425 18:32:09.674109   14407 main.go:141] libmachine: (addons-477322)     <console type='pty'>
	I0425 18:32:09.674184   14407 main.go:141] libmachine: (addons-477322)       <target type='serial' port='0'/>
	I0425 18:32:09.674220   14407 main.go:141] libmachine: (addons-477322)     </console>
	I0425 18:32:09.674238   14407 main.go:141] libmachine: (addons-477322)     <rng model='virtio'>
	I0425 18:32:09.674255   14407 main.go:141] libmachine: (addons-477322)       <backend model='random'>/dev/random</backend>
	I0425 18:32:09.674265   14407 main.go:141] libmachine: (addons-477322)     </rng>
	I0425 18:32:09.674276   14407 main.go:141] libmachine: (addons-477322)     
	I0425 18:32:09.674286   14407 main.go:141] libmachine: (addons-477322)     
	I0425 18:32:09.674297   14407 main.go:141] libmachine: (addons-477322)   </devices>
	I0425 18:32:09.674307   14407 main.go:141] libmachine: (addons-477322) </domain>
	I0425 18:32:09.674333   14407 main.go:141] libmachine: (addons-477322) 
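The <domain> definition printed above is what gets registered with libvirt for the addons-477322 VM. A hedged sketch of pulling the same definition and state back out once the domain exists (again assuming the qemu:///system connection from this run):

  virsh --connect qemu:///system dumpxml addons-477322
  virsh --connect qemu:///system domstate addons-477322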
	I0425 18:32:09.679799   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:77:04:7e in network default
	I0425 18:32:09.680248   14407 main.go:141] libmachine: (addons-477322) Ensuring networks are active...
	I0425 18:32:09.680274   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:09.680901   14407 main.go:141] libmachine: (addons-477322) Ensuring network default is active
	I0425 18:32:09.681216   14407 main.go:141] libmachine: (addons-477322) Ensuring network mk-addons-477322 is active
	I0425 18:32:09.681628   14407 main.go:141] libmachine: (addons-477322) Getting domain xml...
	I0425 18:32:09.682440   14407 main.go:141] libmachine: (addons-477322) Creating domain...
	I0425 18:32:10.898121   14407 main.go:141] libmachine: (addons-477322) Waiting to get IP...
	I0425 18:32:10.898847   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:10.899233   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:10.899302   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:10.899220   14429 retry.go:31] will retry after 239.217748ms: waiting for machine to come up
	I0425 18:32:11.141056   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:11.141494   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:11.141517   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:11.141450   14429 retry.go:31] will retry after 270.176347ms: waiting for machine to come up
	I0425 18:32:11.412761   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:11.413161   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:11.413186   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:11.413113   14429 retry.go:31] will retry after 415.08956ms: waiting for machine to come up
	I0425 18:32:11.829611   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:11.830033   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:11.830062   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:11.829983   14429 retry.go:31] will retry after 464.643201ms: waiting for machine to come up
	I0425 18:32:12.296753   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:12.297076   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:12.297114   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:12.297027   14429 retry.go:31] will retry after 651.866009ms: waiting for machine to come up
	I0425 18:32:12.950911   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:12.951360   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:12.951381   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:12.951318   14429 retry.go:31] will retry after 661.025369ms: waiting for machine to come up
	I0425 18:32:13.614414   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:13.614858   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:13.614882   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:13.614817   14429 retry.go:31] will retry after 888.586656ms: waiting for machine to come up
	I0425 18:32:14.504593   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:14.504996   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:14.505026   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:14.504943   14429 retry.go:31] will retry after 1.452665926s: waiting for machine to come up
	I0425 18:32:15.959193   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:15.959653   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:15.959683   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:15.959621   14429 retry.go:31] will retry after 1.255402186s: waiting for machine to come up
	I0425 18:32:17.216960   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:17.217371   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:17.217390   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:17.217356   14429 retry.go:31] will retry after 2.037520865s: waiting for machine to come up
	I0425 18:32:19.257013   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:19.257421   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:19.257449   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:19.257380   14429 retry.go:31] will retry after 2.037152484s: waiting for machine to come up
	I0425 18:32:21.297654   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:21.298244   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:21.298276   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:21.298160   14429 retry.go:31] will retry after 2.608621662s: waiting for machine to come up
	I0425 18:32:23.909824   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:23.910314   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:23.910342   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:23.910255   14429 retry.go:31] will retry after 3.706941744s: waiting for machine to come up
	I0425 18:32:27.621440   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:27.621850   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:27.621879   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:27.621818   14429 retry.go:31] will retry after 4.669046243s: waiting for machine to come up
	I0425 18:32:32.294454   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.294799   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has current primary IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.294816   14407 main.go:141] libmachine: (addons-477322) Found IP for machine: 192.168.39.239
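The retry loop above is polling libvirt for a DHCP lease on the private network until the guest's MAC address (52:54:00:d2:55:42) shows up. The equivalent manual check, as a sketch using the network name from this run, would be:

  virsh --connect qemu:///system net-dhcp-leases mk-addons-477322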
	I0425 18:32:32.294838   14407 main.go:141] libmachine: (addons-477322) Reserving static IP address...
	I0425 18:32:32.295131   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find host DHCP lease matching {name: "addons-477322", mac: "52:54:00:d2:55:42", ip: "192.168.39.239"} in network mk-addons-477322
	I0425 18:32:32.368610   14407 main.go:141] libmachine: (addons-477322) DBG | Getting to WaitForSSH function...
	I0425 18:32:32.368642   14407 main.go:141] libmachine: (addons-477322) Reserved static IP address: 192.168.39.239
	I0425 18:32:32.368655   14407 main.go:141] libmachine: (addons-477322) Waiting for SSH to be available...
	I0425 18:32:32.371205   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.371639   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.371677   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.371751   14407 main.go:141] libmachine: (addons-477322) DBG | Using SSH client type: external
	I0425 18:32:32.371795   14407 main.go:141] libmachine: (addons-477322) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa (-rw-------)
	I0425 18:32:32.371842   14407 main.go:141] libmachine: (addons-477322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 18:32:32.371858   14407 main.go:141] libmachine: (addons-477322) DBG | About to run SSH command:
	I0425 18:32:32.371870   14407 main.go:141] libmachine: (addons-477322) DBG | exit 0
	I0425 18:32:32.506510   14407 main.go:141] libmachine: (addons-477322) DBG | SSH cmd err, output: <nil>: 
	I0425 18:32:32.506738   14407 main.go:141] libmachine: (addons-477322) KVM machine creation complete!
	I0425 18:32:32.507077   14407 main.go:141] libmachine: (addons-477322) Calling .GetConfigRaw
	I0425 18:32:32.507667   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:32.507945   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:32.508188   14407 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 18:32:32.508209   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:32:32.509461   14407 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 18:32:32.509477   14407 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 18:32:32.509484   14407 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 18:32:32.509490   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:32.511597   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.511940   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.511975   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.512082   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:32.512257   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.512402   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.512532   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:32.512647   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:32.512847   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:32.512863   14407 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 18:32:32.621804   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 18:32:32.621830   14407 main.go:141] libmachine: Detecting the provisioner...
	I0425 18:32:32.621838   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:32.624593   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.624922   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.624945   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.625076   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:32.625259   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.625441   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.625554   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:32.625739   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:32.625941   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:32.625957   14407 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 18:32:32.735728   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 18:32:32.735778   14407 main.go:141] libmachine: found compatible host: buildroot
	I0425 18:32:32.735785   14407 main.go:141] libmachine: Provisioning with buildroot...
	I0425 18:32:32.735792   14407 main.go:141] libmachine: (addons-477322) Calling .GetMachineName
	I0425 18:32:32.736059   14407 buildroot.go:166] provisioning hostname "addons-477322"
	I0425 18:32:32.736084   14407 main.go:141] libmachine: (addons-477322) Calling .GetMachineName
	I0425 18:32:32.736247   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:32.738736   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.739088   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.739117   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.739217   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:32.739398   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.739566   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.739707   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:32.739871   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:32.740024   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:32.740042   14407 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-477322 && echo "addons-477322" | sudo tee /etc/hostname
	I0425 18:32:32.866791   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-477322
	
	I0425 18:32:32.866826   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:32.869256   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.869620   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.869648   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.869766   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:32.869943   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.870081   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.870261   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:32.870461   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:32.870619   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:32.870634   14407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-477322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-477322/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-477322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 18:32:32.988831   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 18:32:32.988865   14407 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 18:32:32.988910   14407 buildroot.go:174] setting up certificates
	I0425 18:32:32.988928   14407 provision.go:84] configureAuth start
	I0425 18:32:32.988940   14407 main.go:141] libmachine: (addons-477322) Calling .GetMachineName
	I0425 18:32:32.989194   14407 main.go:141] libmachine: (addons-477322) Calling .GetIP
	I0425 18:32:32.991753   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.992075   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.992111   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.992323   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:32.994416   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.994676   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.994702   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.994844   14407 provision.go:143] copyHostCerts
	I0425 18:32:32.994901   14407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 18:32:32.995021   14407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 18:32:32.995090   14407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 18:32:32.995152   14407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.addons-477322 san=[127.0.0.1 192.168.39.239 addons-477322 localhost minikube]
	I0425 18:32:33.115468   14407 provision.go:177] copyRemoteCerts
	I0425 18:32:33.115524   14407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 18:32:33.115548   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.118254   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.118570   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.118599   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.118774   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.118943   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.119086   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.119208   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:32:33.205234   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 18:32:33.232868   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0425 18:32:33.261346   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 18:32:33.289923   14407 provision.go:87] duration metric: took 300.978659ms to configureAuth
	I0425 18:32:33.289951   14407 buildroot.go:189] setting minikube options for container-runtime
	I0425 18:32:33.290149   14407 config.go:182] Loaded profile config "addons-477322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:32:33.290270   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.292926   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.293244   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.293269   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.293541   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.293733   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.293896   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.294030   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.294183   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:33.294406   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:33.294443   14407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 18:32:33.580602   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 18:32:33.580664   14407 main.go:141] libmachine: Checking connection to Docker...
	I0425 18:32:33.580678   14407 main.go:141] libmachine: (addons-477322) Calling .GetURL
	I0425 18:32:33.581931   14407 main.go:141] libmachine: (addons-477322) DBG | Using libvirt version 6000000
	I0425 18:32:33.583813   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.584146   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.584174   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.584298   14407 main.go:141] libmachine: Docker is up and running!
	I0425 18:32:33.584317   14407 main.go:141] libmachine: Reticulating splines...
	I0425 18:32:33.584323   14407 client.go:171] duration metric: took 24.622887723s to LocalClient.Create
	I0425 18:32:33.584342   14407 start.go:167] duration metric: took 24.622953174s to libmachine.API.Create "addons-477322"
	I0425 18:32:33.584359   14407 start.go:293] postStartSetup for "addons-477322" (driver="kvm2")
	I0425 18:32:33.584371   14407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 18:32:33.584386   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:33.584619   14407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 18:32:33.584639   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.586625   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.586988   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.587016   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.587161   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.587339   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.587505   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.587639   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:32:33.674561   14407 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 18:32:33.679904   14407 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 18:32:33.679929   14407 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 18:32:33.680000   14407 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 18:32:33.680023   14407 start.go:296] duration metric: took 95.655998ms for postStartSetup
	I0425 18:32:33.680054   14407 main.go:141] libmachine: (addons-477322) Calling .GetConfigRaw
	I0425 18:32:33.680562   14407 main.go:141] libmachine: (addons-477322) Calling .GetIP
	I0425 18:32:33.683312   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.683618   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.683653   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.683858   14407 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/config.json ...
	I0425 18:32:33.684047   14407 start.go:128] duration metric: took 24.741208165s to createHost
	I0425 18:32:33.684072   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.686236   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.686509   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.686545   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.686676   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.686846   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.686997   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.687131   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.687303   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:33.687505   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:33.687521   14407 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 18:32:33.799852   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714069953.768233876
	
	I0425 18:32:33.799880   14407 fix.go:216] guest clock: 1714069953.768233876
	I0425 18:32:33.799887   14407 fix.go:229] Guest: 2024-04-25 18:32:33.768233876 +0000 UTC Remote: 2024-04-25 18:32:33.684060353 +0000 UTC m=+24.852639538 (delta=84.173523ms)
	I0425 18:32:33.799908   14407 fix.go:200] guest clock delta is within tolerance: 84.173523ms
	I0425 18:32:33.799913   14407 start.go:83] releasing machines lock for "addons-477322", held for 24.857147086s
	I0425 18:32:33.799932   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:33.800179   14407 main.go:141] libmachine: (addons-477322) Calling .GetIP
	I0425 18:32:33.802972   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.803469   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.803503   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.803645   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:33.804228   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:33.804401   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:33.804506   14407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 18:32:33.804550   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.804714   14407 ssh_runner.go:195] Run: cat /version.json
	I0425 18:32:33.804741   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.807620   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.807651   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.807972   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.807994   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.808033   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.808057   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.808170   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.808325   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.808404   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.808476   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.808537   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.808597   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.808705   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:32:33.808766   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:32:33.917578   14407 ssh_runner.go:195] Run: systemctl --version
	I0425 18:32:33.924711   14407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 18:32:34.093470   14407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 18:32:34.100098   14407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 18:32:34.100158   14407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 18:32:34.120554   14407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 18:32:34.120613   14407 start.go:494] detecting cgroup driver to use...
	I0425 18:32:34.120673   14407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 18:32:34.139252   14407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 18:32:34.156176   14407 docker.go:217] disabling cri-docker service (if available) ...
	I0425 18:32:34.156229   14407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 18:32:34.172074   14407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 18:32:34.188818   14407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 18:32:34.321077   14407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 18:32:34.465908   14407 docker.go:233] disabling docker service ...
	I0425 18:32:34.465979   14407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 18:32:34.482982   14407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 18:32:34.497440   14407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 18:32:34.631854   14407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 18:32:34.780095   14407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 18:32:34.796352   14407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 18:32:34.818121   14407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 18:32:34.818216   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.831309   14407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 18:32:34.831388   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.844734   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.857818   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.871032   14407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 18:32:34.884226   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.897118   14407 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.920153   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.935413   14407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 18:32:34.949466   14407 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 18:32:34.949523   14407 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 18:32:34.968446   14407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 18:32:34.982842   14407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:32:35.129070   14407 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 18:32:35.289579   14407 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 18:32:35.289707   14407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 18:32:35.295185   14407 start.go:562] Will wait 60s for crictl version
	I0425 18:32:35.295261   14407 ssh_runner.go:195] Run: which crictl
	I0425 18:32:35.299565   14407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 18:32:35.341431   14407 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 18:32:35.341570   14407 ssh_runner.go:195] Run: crio --version
	I0425 18:32:35.376321   14407 ssh_runner.go:195] Run: crio --version
	I0425 18:32:35.409404   14407 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 18:32:35.410955   14407 main.go:141] libmachine: (addons-477322) Calling .GetIP
	I0425 18:32:35.413805   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:35.414177   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:35.414237   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:35.414445   14407 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 18:32:35.419492   14407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:32:35.435405   14407 kubeadm.go:877] updating cluster {Name:addons-477322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-477322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 18:32:35.435507   14407 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:32:35.435548   14407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 18:32:35.472112   14407 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 18:32:35.472171   14407 ssh_runner.go:195] Run: which lz4
	I0425 18:32:35.476932   14407 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 18:32:35.481833   14407 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 18:32:35.481871   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 18:32:37.093365   14407 crio.go:462] duration metric: took 1.616455772s to copy over tarball
	I0425 18:32:37.093432   14407 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 18:32:39.682702   14407 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.589245174s)
	I0425 18:32:39.682732   14407 crio.go:469] duration metric: took 2.589338983s to extract the tarball
	I0425 18:32:39.682741   14407 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 18:32:39.722944   14407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 18:32:39.774424   14407 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 18:32:39.774454   14407 cache_images.go:84] Images are preloaded, skipping loading
	I0425 18:32:39.774464   14407 kubeadm.go:928] updating node { 192.168.39.239 8443 v1.30.0 crio true true} ...
	I0425 18:32:39.774604   14407 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-477322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-477322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 18:32:39.774697   14407 ssh_runner.go:195] Run: crio config
	I0425 18:32:39.827319   14407 cni.go:84] Creating CNI manager for ""
	I0425 18:32:39.827351   14407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 18:32:39.827365   14407 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 18:32:39.827386   14407 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-477322 NodeName:addons-477322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 18:32:39.827564   14407 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-477322"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 18:32:39.827622   14407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 18:32:39.839343   14407 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 18:32:39.839406   14407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 18:32:39.850676   14407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0425 18:32:39.869798   14407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 18:32:39.889261   14407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0425 18:32:39.908921   14407 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I0425 18:32:39.913508   14407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:32:39.928631   14407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:32:40.062192   14407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:32:40.081068   14407 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322 for IP: 192.168.39.239
	I0425 18:32:40.081097   14407 certs.go:194] generating shared ca certs ...
	I0425 18:32:40.081119   14407 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.081284   14407 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 18:32:40.209056   14407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt ...
	I0425 18:32:40.209093   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt: {Name:mk3887859f354ed896fbae7c34bd1bc1db634b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.209270   14407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key ...
	I0425 18:32:40.209281   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key: {Name:mk71370329172ea9afcee9545022ae144932d1fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.209348   14407 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 18:32:40.308956   14407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt ...
	I0425 18:32:40.308984   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt: {Name:mkee8afd19c42bdc2e5f359d8aa6358fc627dcf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.309127   14407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key ...
	I0425 18:32:40.309138   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key: {Name:mke29a388a15f2bd08a1ab201764d3be8a3cef3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.309206   14407 certs.go:256] generating profile certs ...
	I0425 18:32:40.309263   14407 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.key
	I0425 18:32:40.309281   14407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt with IP's: []
	I0425 18:32:40.526590   14407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt ...
	I0425 18:32:40.526618   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: {Name:mkc0d2285ce92926517408da9b07c1b07342b6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.526769   14407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.key ...
	I0425 18:32:40.526779   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.key: {Name:mkf4f9d0102869b03358296e519a19d8577237bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.526843   14407 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key.561cdee7
	I0425 18:32:40.526859   14407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt.561cdee7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.239]
	I0425 18:32:40.675176   14407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt.561cdee7 ...
	I0425 18:32:40.675215   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt.561cdee7: {Name:mk7c67658c25dbae2b93ea93af92c48b425280c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.675377   14407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key.561cdee7 ...
	I0425 18:32:40.675390   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key.561cdee7: {Name:mkaa0d20d78bbf1d529d2d6afabe7b4b38456c4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.675458   14407 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt.561cdee7 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt
	I0425 18:32:40.675555   14407 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key.561cdee7 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key
	I0425 18:32:40.675603   14407 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.key
	I0425 18:32:40.675621   14407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.crt with IP's: []
	I0425 18:32:40.747246   14407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.crt ...
	I0425 18:32:40.747273   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.crt: {Name:mk022faf332c3fc64969534ed737054decdc5298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.747420   14407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.key ...
	I0425 18:32:40.747430   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.key: {Name:mk7bfbfe15f0850685a4c3880da12e0453dd03f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.747597   14407 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 18:32:40.747631   14407 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 18:32:40.747659   14407 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 18:32:40.747688   14407 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 18:32:40.748269   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 18:32:40.806470   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 18:32:40.843230   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 18:32:40.874273   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 18:32:40.901449   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0425 18:32:40.928360   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 18:32:40.956482   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 18:32:40.983212   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 18:32:41.009753   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 18:32:41.036763   14407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 18:32:41.055755   14407 ssh_runner.go:195] Run: openssl version
	I0425 18:32:41.062007   14407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 18:32:41.075276   14407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:32:41.080158   14407 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:32:41.080212   14407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:32:41.086157   14407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 18:32:41.098669   14407 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 18:32:41.103292   14407 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 18:32:41.103338   14407 kubeadm.go:391] StartCluster: {Name:addons-477322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-477322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:32:41.103404   14407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 18:32:41.103441   14407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 18:32:41.142243   14407 cri.go:89] found id: ""
	I0425 18:32:41.142323   14407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0425 18:32:41.153898   14407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 18:32:41.164693   14407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 18:32:41.175946   14407 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 18:32:41.175966   14407 kubeadm.go:156] found existing configuration files:
	
	I0425 18:32:41.176005   14407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 18:32:41.186659   14407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 18:32:41.186712   14407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 18:32:41.197809   14407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 18:32:41.208081   14407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 18:32:41.208155   14407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 18:32:41.219143   14407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 18:32:41.229854   14407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 18:32:41.229910   14407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 18:32:41.241178   14407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 18:32:41.252115   14407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 18:32:41.252180   14407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
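The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted before kubeadm init runs. A minimal shell sketch of that pattern (illustrative only, not minikube's actual code path):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # keep the file only if it already targets the expected control-plane endpoint
        grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done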
	I0425 18:32:41.264070   14407 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 18:32:41.321297   14407 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 18:32:41.321372   14407 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 18:32:41.443720   14407 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 18:32:41.443857   14407 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 18:32:41.443962   14407 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 18:32:41.661182   14407 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 18:32:41.823901   14407 out.go:204]   - Generating certificates and keys ...
	I0425 18:32:41.824034   14407 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 18:32:41.824116   14407 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 18:32:42.144930   14407 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0425 18:32:42.316672   14407 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0425 18:32:42.467008   14407 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0425 18:32:42.724106   14407 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0425 18:32:42.913238   14407 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0425 18:32:42.913370   14407 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-477322 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0425 18:32:43.157029   14407 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0425 18:32:43.157201   14407 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-477322 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0425 18:32:43.351070   14407 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0425 18:32:43.565848   14407 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0425 18:32:43.869010   14407 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0425 18:32:43.869215   14407 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 18:32:44.088470   14407 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 18:32:44.468597   14407 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 18:32:44.644737   14407 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 18:32:45.018229   14407 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 18:32:45.141755   14407 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 18:32:45.142511   14407 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 18:32:45.144765   14407 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 18:32:45.147714   14407 out.go:204]   - Booting up control plane ...
	I0425 18:32:45.147854   14407 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 18:32:45.148842   14407 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 18:32:45.149620   14407 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 18:32:45.165670   14407 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 18:32:45.166136   14407 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 18:32:45.166198   14407 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 18:32:45.293502   14407 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 18:32:45.293624   14407 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 18:32:45.794844   14407 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.733075ms
	I0425 18:32:45.794919   14407 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 18:32:50.793605   14407 kubeadm.go:309] [api-check] The API server is healthy after 5.001976088s
	I0425 18:32:50.807832   14407 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 18:32:50.827916   14407 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 18:32:50.870695   14407 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 18:32:50.870886   14407 kubeadm.go:309] [mark-control-plane] Marking the node addons-477322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 18:32:50.886593   14407 kubeadm.go:309] [bootstrap-token] Using token: ys83sc.bekjayuufeldo30f
	I0425 18:32:50.888127   14407 out.go:204]   - Configuring RBAC rules ...
	I0425 18:32:50.888273   14407 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 18:32:50.897550   14407 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 18:32:50.909013   14407 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 18:32:50.915452   14407 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 18:32:50.919016   14407 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 18:32:50.922502   14407 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 18:32:51.199899   14407 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 18:32:51.637121   14407 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 18:32:52.199156   14407 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 18:32:52.200101   14407 kubeadm.go:309] 
	I0425 18:32:52.200181   14407 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 18:32:52.200192   14407 kubeadm.go:309] 
	I0425 18:32:52.200269   14407 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 18:32:52.200278   14407 kubeadm.go:309] 
	I0425 18:32:52.200321   14407 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 18:32:52.200405   14407 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 18:32:52.200478   14407 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 18:32:52.200488   14407 kubeadm.go:309] 
	I0425 18:32:52.200561   14407 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 18:32:52.200570   14407 kubeadm.go:309] 
	I0425 18:32:52.200637   14407 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 18:32:52.200647   14407 kubeadm.go:309] 
	I0425 18:32:52.200728   14407 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 18:32:52.200828   14407 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 18:32:52.200914   14407 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 18:32:52.200930   14407 kubeadm.go:309] 
	I0425 18:32:52.201003   14407 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 18:32:52.201068   14407 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 18:32:52.201075   14407 kubeadm.go:309] 
	I0425 18:32:52.201151   14407 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ys83sc.bekjayuufeldo30f \
	I0425 18:32:52.201253   14407 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
	I0425 18:32:52.201280   14407 kubeadm.go:309] 	--control-plane 
	I0425 18:32:52.201287   14407 kubeadm.go:309] 
	I0425 18:32:52.201380   14407 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 18:32:52.201412   14407 kubeadm.go:309] 
	I0425 18:32:52.201506   14407 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ys83sc.bekjayuufeldo30f \
	I0425 18:32:52.201657   14407 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed 
	I0425 18:32:52.202324   14407 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
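The only preflight warning concerns the kubelet unit not being enabled at boot; kubeadm's suggested remediation, runnable on the guest over minikube ssh, is simply:

    sudo systemctl enable kubelet.service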
	I0425 18:32:52.202392   14407 cni.go:84] Creating CNI manager for ""
	I0425 18:32:52.202410   14407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 18:32:52.204979   14407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 18:32:52.206337   14407 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 18:32:52.218404   14407 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
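The 496-byte file copied above is the bridge CNI configuration selected for the kvm2 + crio combination. Its exact contents are not shown in the log; an illustrative conflist in the usual bridge-plus-portmap shape (an assumption, not the verbatim file minikube writes) looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "addIf": "true",
          "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }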
	I0425 18:32:52.243570   14407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 18:32:52.243681   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:52.243709   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-477322 minikube.k8s.io/updated_at=2024_04_25T18_32_52_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=addons-477322 minikube.k8s.io/primary=true
	I0425 18:32:52.300375   14407 ops.go:34] apiserver oom_adj: -16
	I0425 18:32:52.416994   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:52.917852   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:53.417204   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:53.917461   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:54.418006   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:54.917274   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:55.417275   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:55.918058   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:56.417114   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:56.917477   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:57.417066   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:57.917439   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:58.417199   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:58.918026   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:59.417113   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:59.917781   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:00.417749   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:00.917011   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:01.417152   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:01.917596   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:02.417553   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:02.917479   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:03.417472   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:03.918018   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:04.417534   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:04.917311   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:05.418092   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:05.942899   14407 kubeadm.go:1107] duration metric: took 13.699285685s to wait for elevateKubeSystemPrivileges
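The create clusterrolebinding call plus the repeated "kubectl get sa default" runs above make up the elevateKubeSystemPrivileges step: minikube grants cluster-admin to the kube-system default service account and keeps polling, roughly every 500 ms, until the default ServiceAccount exists (about 13.7 s here). A hedged shell equivalent of that wait, assuming the same kubeconfig path:

    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5   # the default ServiceAccount appears once kube-controller-manager creates it
    done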
	W0425 18:33:05.942944   14407 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 18:33:05.942955   14407 kubeadm.go:393] duration metric: took 24.839620054s to StartCluster
	I0425 18:33:05.942977   14407 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:33:05.943172   14407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:33:05.943654   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:33:05.943960   14407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0425 18:33:05.944012   14407 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:33:05.945947   14407 out.go:177] * Verifying Kubernetes components...
	I0425 18:33:05.944130   14407 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
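The toEnable map above lists the addons turned on for this profile: cloud-spanner, csi-hostpath-driver, default-storageclass, gcp-auth, helm-tiller, ingress, ingress-dns, inspektor-gadget, metrics-server, nvidia-device-plugin, registry, storage-provisioner, storage-provisioner-rancher, volumesnapshots and yakd. Any one of them can also be toggled after start with the standard addons command, for example (a sketch using the test binary path):

    out/minikube-linux-amd64 -p addons-477322 addons enable metrics-server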
	I0425 18:33:05.944225   14407 config.go:182] Loaded profile config "addons-477322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:33:05.947740   14407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:33:05.947752   14407 addons.go:69] Setting yakd=true in profile "addons-477322"
	I0425 18:33:05.947785   14407 addons.go:234] Setting addon yakd=true in "addons-477322"
	I0425 18:33:05.947807   14407 addons.go:69] Setting ingress-dns=true in profile "addons-477322"
	I0425 18:33:05.947822   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.947831   14407 addons.go:234] Setting addon ingress-dns=true in "addons-477322"
	I0425 18:33:05.947843   14407 addons.go:69] Setting registry=true in profile "addons-477322"
	I0425 18:33:05.947861   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.947867   14407 addons.go:69] Setting metrics-server=true in profile "addons-477322"
	I0425 18:33:05.947873   14407 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-477322"
	I0425 18:33:05.947883   14407 addons.go:69] Setting cloud-spanner=true in profile "addons-477322"
	I0425 18:33:05.947891   14407 addons.go:69] Setting default-storageclass=true in profile "addons-477322"
	I0425 18:33:05.947905   14407 addons.go:234] Setting addon cloud-spanner=true in "addons-477322"
	I0425 18:33:05.947908   14407 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-477322"
	I0425 18:33:05.947918   14407 addons.go:234] Setting addon metrics-server=true in "addons-477322"
	I0425 18:33:05.947922   14407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-477322"
	I0425 18:33:05.947933   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.947938   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.947951   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.947955   14407 addons.go:69] Setting helm-tiller=true in profile "addons-477322"
	I0425 18:33:05.947974   14407 addons.go:234] Setting addon helm-tiller=true in "addons-477322"
	I0425 18:33:05.947990   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.948262   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948282   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948298   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948313   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948315   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948326   14407 addons.go:69] Setting inspektor-gadget=true in profile "addons-477322"
	I0425 18:33:05.948331   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948331   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948339   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948349   14407 addons.go:234] Setting addon inspektor-gadget=true in "addons-477322"
	I0425 18:33:05.948352   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948367   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948374   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.948371   14407 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-477322"
	I0425 18:33:05.948316   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948398   14407 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-477322"
	I0425 18:33:05.948417   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.947950   14407 addons.go:69] Setting gcp-auth=true in profile "addons-477322"
	I0425 18:33:05.948439   14407 mustload.go:65] Loading cluster: addons-477322
	I0425 18:33:05.948459   14407 addons.go:69] Setting ingress=true in profile "addons-477322"
	I0425 18:33:05.948482   14407 addons.go:234] Setting addon ingress=true in "addons-477322"
	I0425 18:33:05.948523   14407 addons.go:69] Setting volumesnapshots=true in profile "addons-477322"
	I0425 18:33:05.948547   14407 addons.go:234] Setting addon volumesnapshots=true in "addons-477322"
	I0425 18:33:05.948573   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.948594   14407 config.go:182] Loaded profile config "addons-477322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:33:05.948668   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948689   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.947875   14407 addons.go:234] Setting addon registry=true in "addons-477322"
	I0425 18:33:05.948774   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948792   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.948850   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948898   14407 addons.go:69] Setting storage-provisioner=true in profile "addons-477322"
	I0425 18:33:05.948919   14407 addons.go:234] Setting addon storage-provisioner=true in "addons-477322"
	I0425 18:33:05.948930   14407 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-477322"
	I0425 18:33:05.948941   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948944   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948968   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948969   14407 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-477322"
	I0425 18:33:05.948983   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.949050   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.949117   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.949144   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.949257   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.949329   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.949715   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.949927   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.950297   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.950347   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.950419   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.950445   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.969960   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0425 18:33:05.970191   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0425 18:33:05.970508   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.970600   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.971079   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.971080   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.971124   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.971108   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.971470   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.971506   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.972055   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.972078   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.972055   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.972127   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.972470   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42837
	I0425 18:33:05.982610   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.982663   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.982917   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42579
	I0425 18:33:05.983012   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I0425 18:33:05.983075   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43543
	I0425 18:33:05.983252   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.983993   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.984012   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.984078   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.984534   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.984602   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.984670   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.985089   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.985129   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.991231   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.991254   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.991409   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.991423   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.991535   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.991545   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.992476   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.992488   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.992538   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.992648   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45537
	I0425 18:33:05.993141   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:05.993205   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.993213   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:05.993241   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.993855   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.994435   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.994453   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.994834   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.995046   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:05.997336   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.997728   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.997749   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.999439   14407 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-477322"
	I0425 18:33:05.999480   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.999497   14407 addons.go:234] Setting addon default-storageclass=true in "addons-477322"
	I0425 18:33:05.999531   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.999819   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.999837   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.999915   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.999939   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.001554   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I0425 18:33:06.002084   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.002634   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.002657   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.003013   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.003541   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.003580   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.008778   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37749
	I0425 18:33:06.009470   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.009573   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37813
	I0425 18:33:06.010052   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.010076   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.010135   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.010572   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.010597   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.010911   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.011222   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.011459   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.011480   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.011733   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.011762   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.012518   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43295
	I0425 18:33:06.017103   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I0425 18:33:06.017390   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.018427   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.019002   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.019020   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.019381   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.019906   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.019944   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.021119   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.021137   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.021485   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.022052   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.022089   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.028780   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42485
	I0425 18:33:06.029372   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.030018   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.030037   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.032523   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I0425 18:33:06.032908   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.033494   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.033509   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.034116   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.034362   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.034967   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.035503   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.035540   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.036381   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.038648   14407 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0425 18:33:06.040889   14407 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0425 18:33:06.040907   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0425 18:33:06.040929   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.038730   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44209
	I0425 18:33:06.037297   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I0425 18:33:06.041253   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41697
	I0425 18:33:06.042234   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.042586   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.043032   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.043047   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.043512   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.043633   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.043938   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.043954   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.044407   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.044439   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.044658   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.045163   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.045187   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.045373   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.045525   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.045670   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.045805   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.046170   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.046183   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.046298   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.046835   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.047122   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.048936   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.049177   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.050745   14407 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0425 18:33:06.052339   14407 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 18:33:06.052357   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 18:33:06.052378   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.055538   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.056154   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.056184   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.056386   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.056546   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.056678   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.056791   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.057063   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45241
	I0425 18:33:06.057677   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.057764   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I0425 18:33:06.061243   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.061271   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.061335   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I0425 18:33:06.062069   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.062161   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0425 18:33:06.063316   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.063334   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.063507   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.063648   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.063700   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.063732   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.064272   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.064347   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.064362   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.064373   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0425 18:33:06.064350   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I0425 18:33:06.064831   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.064883   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.065103   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.065248   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.065260   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.065617   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.066008   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.066075   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.066147   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.066182   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.067711   14407 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0425 18:33:06.066734   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.067025   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.067314   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.067856   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.070638   14407 out.go:177]   - Using image docker.io/busybox:stable
	I0425 18:33:06.069101   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.070125   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.071922   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
	I0425 18:33:06.071988   14407 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0425 18:33:06.072416   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.073224   14407 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0425 18:33:06.074799   14407 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0425 18:33:06.074817   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0425 18:33:06.074833   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.072498   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37565
	I0425 18:33:06.072566   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0425 18:33:06.073189   14407 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0425 18:33:06.076678   14407 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0425 18:33:06.076692   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0425 18:33:06.076707   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.075525   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0425 18:33:06.073295   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.073604   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.073817   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.076912   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.073278   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0425 18:33:06.077892   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.075628   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.076450   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.078417   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40705
	I0425 18:33:06.078595   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.078609   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.078610   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.078624   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.078686   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.078921   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.078997   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.079045   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.079080   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.079325   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.079338   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.079383   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.079774   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.079826   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.079848   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.079861   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.079940   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.079951   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.080184   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.080421   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.080965   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.080998   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.081176   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.081317   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.081387   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.081427   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.081539   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.082980   14407 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0425 18:33:06.084314   14407 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0425 18:33:06.084331   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0425 18:33:06.084348   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.083066   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.082355   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.081978   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.084422   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.083385   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.083890   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.084011   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.085144   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.086503   14407 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 18:33:06.087812   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.087884   14407 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 18:33:06.087896   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 18:33:06.087908   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.086513   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.089297   14407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0425 18:33:06.085379   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.087130   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.087944   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.088199   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.088287   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.088528   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.090561   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0425 18:33:06.090684   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0425 18:33:06.090708   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.090783   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.090855   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.091308   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.091761   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0425 18:33:06.091786   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.091797   14407 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0425 18:33:06.091853   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.091896   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.092057   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.092522   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.092576   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.093031   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.094279   14407 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0425 18:33:06.094292   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.094300   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0425 18:33:06.094312   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.093179   14407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0425 18:33:06.093241   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.093998   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.093998   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.094018   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.094019   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.094079   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.094728   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.096872   14407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0425 18:33:06.095640   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.095672   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0425 18:33:06.095681   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.095720   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.095869   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.095899   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.095918   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.095975   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.096671   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.097518   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.098108   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0425 18:33:06.098129   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.098169   14407 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0425 18:33:06.098180   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0425 18:33:06.098190   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.098216   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.098292   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.098313   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.098439   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.098551   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.098741   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.098888   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.099007   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.099308   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.099526   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.099922   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.101917   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.102269   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.103931   14407 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0425 18:33:06.102585   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.102758   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.102893   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.103077   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.103637   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.105170   14407 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0425 18:33:06.105180   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0425 18:33:06.105190   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.105216   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.105270   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.105284   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.105302   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.105339   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.106773   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0425 18:33:06.105793   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.105802   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.105912   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I0425 18:33:06.107812   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.109207   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0425 18:33:06.108097   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.108213   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.108210   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.108363   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.108459   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.110393   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.111695   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0425 18:33:06.110696   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.110790   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.112741   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.113797   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0425 18:33:06.112926   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.113043   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.114978   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0425 18:33:06.116249   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0425 18:33:06.115155   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.115199   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.118597   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	W0425 18:33:06.118267   14407 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41796->192.168.39.239:22: read: connection reset by peer
	I0425 18:33:06.119908   14407 retry.go:31] will retry after 158.527244ms: ssh: handshake failed: read tcp 192.168.39.1:41796->192.168.39.239:22: read: connection reset by peer
	I0425 18:33:06.119656   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.119877   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0425 18:33:06.121340   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0425 18:33:06.120162   14407 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 18:33:06.121361   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0425 18:33:06.121365   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 18:33:06.121384   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.121386   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.121265   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35229
	I0425 18:33:06.121825   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.122983   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.123005   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.123469   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.123739   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.125125   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.125401   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.125465   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.125715   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.125734   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.127103   14407 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0425 18:33:06.125893   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.126025   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.126029   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.128348   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.129594   14407 out.go:177]   - Using image docker.io/registry:2.8.3
	I0425 18:33:06.128548   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.128562   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.130867   14407 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0425 18:33:06.130884   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0425 18:33:06.130895   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.130964   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.131054   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.131102   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.131220   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.133835   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.134192   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.134239   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.134377   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.134539   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.134686   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.134802   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.556088   14407 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 18:33:06.556115   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0425 18:33:06.606644   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0425 18:33:06.651816   14407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:33:06.652191   14407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0425 18:33:06.678222   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0425 18:33:06.681615   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0425 18:33:06.690253   14407 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0425 18:33:06.690272   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0425 18:33:06.696050   14407 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 18:33:06.696067   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 18:33:06.747250   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0425 18:33:06.747272   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0425 18:33:06.749360   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 18:33:06.750651   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0425 18:33:06.762924   14407 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0425 18:33:06.762942   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0425 18:33:06.771911   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0425 18:33:06.797469   14407 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0425 18:33:06.797490   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0425 18:33:06.798726   14407 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0425 18:33:06.798742   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0425 18:33:06.808296   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 18:33:06.896731   14407 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0425 18:33:06.896760   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0425 18:33:06.923799   14407 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0425 18:33:06.923829   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0425 18:33:06.931916   14407 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 18:33:06.931933   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 18:33:06.989070   14407 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0425 18:33:06.989093   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0425 18:33:07.046858   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0425 18:33:07.046879   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0425 18:33:07.057601   14407 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0425 18:33:07.057619   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0425 18:33:07.140734   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 18:33:07.171248   14407 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0425 18:33:07.171265   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0425 18:33:07.188241   14407 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0425 18:33:07.188258   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0425 18:33:07.225678   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0425 18:33:07.232040   14407 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0425 18:33:07.232057   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0425 18:33:07.246141   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0425 18:33:07.246160   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0425 18:33:07.341420   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0425 18:33:07.406952   14407 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0425 18:33:07.406984   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0425 18:33:07.427563   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0425 18:33:07.427586   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0425 18:33:07.434125   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0425 18:33:07.434145   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0425 18:33:07.506614   14407 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0425 18:33:07.506642   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0425 18:33:07.668075   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0425 18:33:07.668103   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0425 18:33:07.711218   14407 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0425 18:33:07.711237   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0425 18:33:07.763120   14407 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0425 18:33:07.763142   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0425 18:33:07.846931   14407 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0425 18:33:07.846952   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0425 18:33:08.064424   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0425 18:33:08.064445   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0425 18:33:08.075370   14407 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0425 18:33:08.075387   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0425 18:33:08.300653   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0425 18:33:08.427844   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0425 18:33:08.449781   14407 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0425 18:33:08.449811   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0425 18:33:08.470617   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0425 18:33:08.470639   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0425 18:33:08.724521   14407 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0425 18:33:08.724546   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0425 18:33:08.817882   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0425 18:33:08.817902   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0425 18:33:08.998500   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0425 18:33:09.322248   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0425 18:33:09.322267   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0425 18:33:09.750536   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0425 18:33:09.750560   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0425 18:33:10.450037   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0425 18:33:11.994218   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.3875271s)
	I0425 18:33:11.994252   14407 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.342404193s)
	I0425 18:33:11.994280   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:11.994291   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:11.994286   14407 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.342069888s)
	I0425 18:33:11.994311   14407 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0425 18:33:11.994358   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.316107074s)
	I0425 18:33:11.994398   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:11.994411   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:11.994554   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:11.994604   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:11.994621   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:11.994624   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:11.994682   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:11.994687   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:11.994732   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:11.994744   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:11.994751   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:11.994761   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:11.994896   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:11.994913   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:11.995144   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:11.995155   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:11.995194   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:12.019068   14407 node_ready.go:35] waiting up to 6m0s for node "addons-477322" to be "Ready" ...
	I0425 18:33:12.130469   14407 node_ready.go:49] node "addons-477322" has status "Ready":"True"
	I0425 18:33:12.130501   14407 node_ready.go:38] duration metric: took 111.404224ms for node "addons-477322" to be "Ready" ...
	I0425 18:33:12.130514   14407 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 18:33:12.149903   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:12.149930   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:12.150265   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:12.150333   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:12.150350   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:12.245756   14407 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6wpfr" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.430039   14407 pod_ready.go:92] pod "coredns-7db6d8ff4d-6wpfr" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.430061   14407 pod_ready.go:81] duration metric: took 184.280371ms for pod "coredns-7db6d8ff4d-6wpfr" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.430071   14407 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w9mgq" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.535521   14407 pod_ready.go:92] pod "coredns-7db6d8ff4d-w9mgq" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.535553   14407 pod_ready.go:81] duration metric: took 105.475613ms for pod "coredns-7db6d8ff4d-w9mgq" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.535567   14407 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.556162   14407 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-477322" context rescaled to 1 replicas
	I0425 18:33:12.591846   14407 pod_ready.go:92] pod "etcd-addons-477322" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.591870   14407 pod_ready.go:81] duration metric: took 56.29632ms for pod "etcd-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.591879   14407 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.670472   14407 pod_ready.go:92] pod "kube-apiserver-addons-477322" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.670502   14407 pod_ready.go:81] duration metric: took 78.615552ms for pod "kube-apiserver-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.670515   14407 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.768097   14407 pod_ready.go:92] pod "kube-controller-manager-addons-477322" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.768120   14407 pod_ready.go:81] duration metric: took 97.597567ms for pod "kube-controller-manager-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.768131   14407 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rgvqp" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.864203   14407 pod_ready.go:92] pod "kube-proxy-rgvqp" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.864233   14407 pod_ready.go:81] duration metric: took 96.09537ms for pod "kube-proxy-rgvqp" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.864247   14407 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:13.087596   14407 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0425 18:33:13.087640   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:13.090723   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:13.091163   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:13.091191   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:13.091424   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:13.091641   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:13.091833   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:13.091970   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:13.244598   14407 pod_ready.go:92] pod "kube-scheduler-addons-477322" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:13.244622   14407 pod_ready.go:81] duration metric: took 380.367298ms for pod "kube-scheduler-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:13.244632   14407 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:13.568658   14407 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0425 18:33:13.660837   14407 addons.go:234] Setting addon gcp-auth=true in "addons-477322"
	I0425 18:33:13.660896   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:13.661172   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:13.661197   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:13.676994   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I0425 18:33:13.677538   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:13.678062   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:13.678087   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:13.678465   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:13.678910   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:13.678938   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:13.695280   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34497
	I0425 18:33:13.695819   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:13.696324   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:13.696351   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:13.696608   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:13.696776   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:13.698233   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:13.698432   14407 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0425 18:33:13.698450   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:13.700904   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:13.701288   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:13.701315   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:13.701430   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:13.701582   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:13.701751   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:13.701908   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:15.260636   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:15.793560   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.044172301s)
	I0425 18:33:15.793622   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793634   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.793638   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.042962118s)
	I0425 18:33:15.793681   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793679   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.112020981s)
	I0425 18:33:15.793691   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.793710   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793748   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.021810986s)
	I0425 18:33:15.793817   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793763   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.793860   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.653100429s)
	I0425 18:33:15.793879   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.793884   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793895   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.793781   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.985463529s)
	I0425 18:33:15.793933   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793946   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.793963   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794368   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.794003   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.794028   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.794407   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.794412   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794417   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794422   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794425   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794430   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794433   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794064   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.794422   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.795889724s)
	I0425 18:33:15.794472   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794482   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794088   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.794491   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794496   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794504   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794511   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794483   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794523   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794119   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.568411965s)
	I0425 18:33:15.794123   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.794546   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794553   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794253   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.794579   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794587   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794594   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.795266   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.795289   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.795308   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.795337   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.795345   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.795355   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.795364   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.795424   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.795432   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.795440   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.795451   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.795505   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.795528   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.795535   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.795754   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.795765   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.795776   14407 addons.go:470] Verifying addon metrics-server=true in "addons-477322"
	I0425 18:33:15.795832   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.795867   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.795876   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794077   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.794330   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.493586195s)
	W0425 18:33:15.796340   14407 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0425 18:33:15.796360   14407 retry.go:31] will retry after 294.271271ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0425 18:33:15.796396   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.796422   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.796429   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.796479   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.796486   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.796642   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.796662   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.796668   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.796675   14407 addons.go:470] Verifying addon ingress=true in "addons-477322"
	I0425 18:33:15.794162   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.452704683s)
	I0425 18:33:15.794330   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.366446473s)
	I0425 18:33:15.797236   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.797255   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.797385   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.798573   14407 out.go:177] * Verifying ingress addon...
	I0425 18:33:15.798629   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.800495   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.798639   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.800539   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.800552   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.798642   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.800587   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.798653   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.800560   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.800822   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.800842   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.800855   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.800858   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.800864   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.800872   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.800888   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.800897   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.800980   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.800993   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.801002   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.801009   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.801328   14407 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0425 18:33:15.802100   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.802112   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.802105   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.802126   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.802107   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.802155   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.802163   14407 addons.go:470] Verifying addon registry=true in "addons-477322"
	I0425 18:33:15.804238   14407 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-477322 service yakd-dashboard -n yakd-dashboard
	
	I0425 18:33:15.805569   14407 out.go:177] * Verifying registry addon...
	I0425 18:33:15.807529   14407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0425 18:33:15.817239   14407 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0425 18:33:15.817254   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:15.838190   14407 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0425 18:33:15.838223   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
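
Editorial note (not part of the test output): the repeated kapi.go:96 lines that follow are a readiness poll; minikube lists pods in the target namespace by label selector and re-checks until they leave Pending. A hedged client-go sketch of that kind of loop, using the ingress-nginx selector and namespace from the log (illustrative only, not the actual kapi.go implementation):

// Sketch: poll a namespace until every pod matching a label selector is Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				// Corresponds to one "current state: Pending" line in the log.
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPods(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
		panic(err)
	}
}

Each loop iteration maps to one of the timestamped "waiting for pod" lines below, which continue until the addon pods report Running.
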
	I0425 18:33:15.859853   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.859877   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.860120   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.860165   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.860175   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:16.091338   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0425 18:33:16.306582   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:16.341078   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:16.828513   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:16.828858   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:17.309805   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:17.315934   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:17.779252   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:17.834251   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:17.843985   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.393894349s)
	I0425 18:33:17.844012   14407 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.145558273s)
	I0425 18:33:17.844046   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:17.844061   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:17.845855   14407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0425 18:33:17.844390   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:17.844430   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:17.847356   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:17.847368   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:17.847374   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:17.848824   14407 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0425 18:33:17.847696   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:17.847725   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:17.850145   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:17.850166   14407 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-477322"
	I0425 18:33:17.850182   14407 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0425 18:33:17.850199   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0425 18:33:17.851547   14407 out.go:177] * Verifying csi-hostpath-driver addon...
	I0425 18:33:17.853958   14407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0425 18:33:17.870815   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:17.878910   14407 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0425 18:33:17.878945   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:18.065904   14407 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0425 18:33:18.065923   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0425 18:33:18.194029   14407 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0425 18:33:18.194049   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0425 18:33:18.297738   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0425 18:33:18.307261   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:18.315947   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:18.361108   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:18.805482   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:18.811931   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:18.860849   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:19.058797   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.967403897s)
	I0425 18:33:19.058850   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:19.058871   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:19.059145   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:19.059230   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:19.059247   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:19.059255   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:19.059199   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:19.059542   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:19.059562   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:19.059571   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:19.306296   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:19.311815   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:19.359810   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:19.786115   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:19.854701   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.556932672s)
	I0425 18:33:19.854746   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:19.854760   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:19.855028   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:19.855050   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:19.855053   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:19.855066   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:19.855083   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:19.855398   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:19.855416   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:19.855462   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:19.857132   14407 addons.go:470] Verifying addon gcp-auth=true in "addons-477322"
	I0425 18:33:19.858799   14407 out.go:177] * Verifying gcp-auth addon...
	I0425 18:33:19.861172   14407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0425 18:33:19.864370   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:19.865040   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:19.913020   14407 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0425 18:33:19.913044   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:19.913651   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:20.306884   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:20.314076   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:20.364700   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:20.367044   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:20.806573   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:20.813167   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:20.868369   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:20.870751   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:21.305995   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:21.312085   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:21.359803   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:21.369098   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:21.808020   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:21.812439   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:21.861762   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:21.865632   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:22.251413   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:22.306544   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:22.311444   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:22.363626   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:22.365260   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:22.806131   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:22.812629   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:22.860071   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:22.865097   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:23.306736   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:23.312372   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:23.362139   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:23.365983   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:23.805758   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:23.811989   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:23.860249   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:23.864353   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:24.251752   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:24.306423   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:24.312562   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:24.359808   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:24.364701   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:24.806552   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:24.813154   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:24.860008   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:24.864400   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:25.306763   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:25.312685   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:25.360634   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:25.365789   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:25.806827   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:25.812288   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:25.860904   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:25.864383   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:26.252006   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:26.305869   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:26.313315   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:26.359588   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:26.367081   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:26.806789   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:26.812132   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:26.860186   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:26.864986   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:27.306197   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:27.311840   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:27.359811   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:27.364956   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:27.806045   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:27.812688   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:27.868509   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:27.869679   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:28.307004   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:28.312918   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:28.360027   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:28.365119   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:28.750789   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:28.805831   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:28.811983   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:28.860500   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:28.865772   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:29.306856   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:29.312241   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:29.360836   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:29.366530   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:29.805623   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:29.812354   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:29.859147   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:29.864702   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:30.307855   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:30.326565   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:30.359566   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:30.370380   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:31.077878   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:31.080047   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:31.083255   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:31.084975   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:31.089823   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:31.308344   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:31.313651   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:31.359813   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:31.365225   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:31.806569   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:31.812748   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:31.861278   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:31.866519   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:32.307301   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:32.311370   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:32.361552   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:32.364389   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:32.807760   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:32.811700   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:32.859827   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:32.866330   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:33.252374   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:33.306430   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:33.311872   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:33.359678   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:33.364927   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:33.807100   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:33.812600   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:33.860311   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:33.865000   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:34.306787   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:34.311853   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:34.359961   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:34.364692   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:34.806393   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:34.811991   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:34.859930   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:34.864888   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:35.306786   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:35.311763   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:35.359665   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:35.364726   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:35.751068   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:35.807210   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:35.811423   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:35.860106   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:35.864127   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:36.307479   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:36.312370   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:36.359803   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:36.365124   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:36.806583   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:36.811706   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:36.861183   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:36.864113   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:37.306130   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:37.313113   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:37.360310   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:37.364514   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:37.756803   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:37.805999   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:37.812686   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:37.859688   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:37.865281   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:38.306141   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:38.316002   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:38.361127   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:38.365862   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:38.806792   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:38.812037   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:38.861468   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:38.865474   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:39.306331   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:39.312653   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:39.359974   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:39.364073   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:39.806055   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:39.812986   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:39.860426   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:39.864656   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:40.251034   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:40.306282   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:40.311993   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:40.359435   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:40.364513   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:41.129040   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:41.129717   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:41.130523   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:41.131077   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:41.306033   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:41.312532   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:41.361234   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:41.364475   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:41.805876   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:41.812230   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:41.860067   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:41.863862   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:42.251270   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:42.306130   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:42.312572   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:42.359325   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:42.364507   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:42.806151   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:42.813663   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:42.859408   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:42.864622   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:43.305522   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:43.311906   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:43.359435   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:43.364848   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:43.807001   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:43.812321   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:43.859774   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:43.865137   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:44.252053   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:44.314995   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:44.315081   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:44.361146   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:44.366486   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:44.813825   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:44.823559   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:44.860668   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:44.866930   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:45.306711   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:45.313084   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:45.360393   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:45.365595   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:45.808364   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:45.811272   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:45.860403   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:45.866280   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:46.255616   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:46.307104   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:46.316214   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:46.363096   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:46.365630   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:46.805929   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:46.812569   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:46.859419   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:46.864700   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:47.306238   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:47.315922   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:47.360741   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:47.364941   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:47.806245   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:47.817534   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:47.860546   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:47.864893   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:48.306663   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:48.312287   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:48.360173   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:48.364253   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:48.751846   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:48.806130   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:48.812581   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:48.860349   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:48.866901   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:49.306647   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:49.313375   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:49.359742   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:49.372608   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:49.809936   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:49.813659   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:49.859111   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:49.864206   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:50.305912   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:50.312908   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:50.359922   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:50.364282   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:50.752228   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:50.806394   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:50.812431   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:50.861240   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:50.864570   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:51.306781   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:51.312573   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:51.359919   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:51.364686   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:51.807183   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:51.812712   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:51.860298   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:51.865218   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:52.308033   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:52.312731   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:52.359736   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:52.365230   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:52.755882   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:52.806774   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:52.812371   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:52.934557   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:52.939011   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:53.306909   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:53.312510   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:53.360836   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:53.365407   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:53.806351   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:53.812266   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:53.859920   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:53.865413   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:54.307563   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:54.311753   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:54.360130   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:54.364760   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:54.806126   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:54.814788   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:54.859342   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:54.864643   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:55.511452   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:55.512347   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:55.512831   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:55.515177   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:55.517656   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:55.806785   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:55.814232   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:55.859880   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:55.868423   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:56.305798   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:56.311940   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:56.360245   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:56.364814   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:56.807745   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:56.819274   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:56.860458   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:56.865098   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:57.307296   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:57.311692   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:57.359863   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:57.365489   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:57.752390   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:57.806529   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:57.812736   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:57.859925   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:57.865425   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:58.306572   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:58.313896   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:58.360112   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:58.366091   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:58.806133   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:58.812734   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:58.859731   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:58.865188   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:59.312259   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:59.316925   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:59.361097   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:59.365586   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:59.806668   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:59.814608   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:59.860182   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:59.864138   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:00.251718   14407 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"True"
	I0425 18:34:00.251744   14407 pod_ready.go:81] duration metric: took 47.007105611s for pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace to be "Ready" ...
	I0425 18:34:00.251761   14407 pod_ready.go:38] duration metric: took 48.121235014s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 18:34:00.251779   14407 api_server.go:52] waiting for apiserver process to appear ...
	I0425 18:34:00.251829   14407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:34:00.270545   14407 api_server.go:72] duration metric: took 54.326488387s to wait for apiserver process to appear ...
	I0425 18:34:00.270582   14407 api_server.go:88] waiting for apiserver healthz status ...
	I0425 18:34:00.270604   14407 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0425 18:34:00.274815   14407 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I0425 18:34:00.275915   14407 api_server.go:141] control plane version: v1.30.0
	I0425 18:34:00.275938   14407 api_server.go:131] duration metric: took 5.347958ms to wait for apiserver health ...
	I0425 18:34:00.275949   14407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 18:34:00.285339   14407 system_pods.go:59] 18 kube-system pods found
	I0425 18:34:00.285371   14407 system_pods.go:61] "coredns-7db6d8ff4d-6wpfr" [a4f7208b-0870-4a3c-bb2e-e6ad6d87404b] Running
	I0425 18:34:00.285382   14407 system_pods.go:61] "csi-hostpath-attacher-0" [c938c096-1833-4f10-b4fc-27cda6579f8b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0425 18:34:00.285390   14407 system_pods.go:61] "csi-hostpath-resizer-0" [e4a15e27-1979-40da-a400-a7fc1b6fe78c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0425 18:34:00.285401   14407 system_pods.go:61] "csi-hostpathplugin-fprlv" [b9e25dba-dbbc-46ee-be05-349125de51e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0425 18:34:00.285408   14407 system_pods.go:61] "etcd-addons-477322" [e6e3f83f-3036-4a38-8c6b-2a64085baec5] Running
	I0425 18:34:00.285413   14407 system_pods.go:61] "kube-apiserver-addons-477322" [d33f75a1-63a3-4dd6-b700-c6df57e50bed] Running
	I0425 18:34:00.285419   14407 system_pods.go:61] "kube-controller-manager-addons-477322" [dda70622-1ef9-4f3f-8e04-d40e44885694] Running
	I0425 18:34:00.285426   14407 system_pods.go:61] "kube-ingress-dns-minikube" [c2b29e86-902f-43bc-95db-5900cc3f5725] Running
	I0425 18:34:00.285434   14407 system_pods.go:61] "kube-proxy-rgvqp" [aa79ab2f-3125-426d-a63a-8dba44e5e06c] Running
	I0425 18:34:00.285439   14407 system_pods.go:61] "kube-scheduler-addons-477322" [0e99db52-9c82-4715-a6a2-dc9e90dcb9c1] Running
	I0425 18:34:00.285454   14407 system_pods.go:61] "metrics-server-c59844bb4-bw7rc" [5e6ef0c9-2d28-429e-a92f-7bb24314635d] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 18:34:00.285462   14407 system_pods.go:61] "nvidia-device-plugin-daemonset-4tmhd" [e5294b6c-a965-4df2-8c07-1696d3c1ea57] Running
	I0425 18:34:00.285472   14407 system_pods.go:61] "registry-proxy-vcjwf" [daff0d5c-8ea3-43fd-948e-5ac439d1a5a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0425 18:34:00.285485   14407 system_pods.go:61] "registry-wf47l" [0d3a67d8-466b-42fa-8b7b-e306fee91c84] Running
	I0425 18:34:00.285496   14407 system_pods.go:61] "snapshot-controller-745499f584-8fj49" [bb9e98cb-566f-4856-a7c4-5ae8da1442f4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0425 18:34:00.285507   14407 system_pods.go:61] "snapshot-controller-745499f584-q6cdl" [8f39480c-bcbe-4ed0-8f86-c5afca6fda25] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0425 18:34:00.285516   14407 system_pods.go:61] "storage-provisioner" [930ba2a2-a45e-4db3-9e58-f57677e70097] Running
	I0425 18:34:00.285527   14407 system_pods.go:61] "tiller-deploy-6677d64bcd-dkd7m" [aa079112-30fb-4401-9271-cf4059a1c2ce] Running
	I0425 18:34:00.285537   14407 system_pods.go:74] duration metric: took 9.579541ms to wait for pod list to return data ...
	I0425 18:34:00.285550   14407 default_sa.go:34] waiting for default service account to be created ...
	I0425 18:34:00.287890   14407 default_sa.go:45] found service account: "default"
	I0425 18:34:00.287909   14407 default_sa.go:55] duration metric: took 2.349805ms for default service account to be created ...
	I0425 18:34:00.287917   14407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 18:34:00.296368   14407 system_pods.go:86] 18 kube-system pods found
	I0425 18:34:00.296395   14407 system_pods.go:89] "coredns-7db6d8ff4d-6wpfr" [a4f7208b-0870-4a3c-bb2e-e6ad6d87404b] Running
	I0425 18:34:00.296403   14407 system_pods.go:89] "csi-hostpath-attacher-0" [c938c096-1833-4f10-b4fc-27cda6579f8b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0425 18:34:00.296412   14407 system_pods.go:89] "csi-hostpath-resizer-0" [e4a15e27-1979-40da-a400-a7fc1b6fe78c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0425 18:34:00.296423   14407 system_pods.go:89] "csi-hostpathplugin-fprlv" [b9e25dba-dbbc-46ee-be05-349125de51e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0425 18:34:00.296440   14407 system_pods.go:89] "etcd-addons-477322" [e6e3f83f-3036-4a38-8c6b-2a64085baec5] Running
	I0425 18:34:00.296447   14407 system_pods.go:89] "kube-apiserver-addons-477322" [d33f75a1-63a3-4dd6-b700-c6df57e50bed] Running
	I0425 18:34:00.296457   14407 system_pods.go:89] "kube-controller-manager-addons-477322" [dda70622-1ef9-4f3f-8e04-d40e44885694] Running
	I0425 18:34:00.296464   14407 system_pods.go:89] "kube-ingress-dns-minikube" [c2b29e86-902f-43bc-95db-5900cc3f5725] Running
	I0425 18:34:00.296474   14407 system_pods.go:89] "kube-proxy-rgvqp" [aa79ab2f-3125-426d-a63a-8dba44e5e06c] Running
	I0425 18:34:00.296481   14407 system_pods.go:89] "kube-scheduler-addons-477322" [0e99db52-9c82-4715-a6a2-dc9e90dcb9c1] Running
	I0425 18:34:00.296492   14407 system_pods.go:89] "metrics-server-c59844bb4-bw7rc" [5e6ef0c9-2d28-429e-a92f-7bb24314635d] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 18:34:00.296499   14407 system_pods.go:89] "nvidia-device-plugin-daemonset-4tmhd" [e5294b6c-a965-4df2-8c07-1696d3c1ea57] Running
	I0425 18:34:00.296507   14407 system_pods.go:89] "registry-proxy-vcjwf" [daff0d5c-8ea3-43fd-948e-5ac439d1a5a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0425 18:34:00.296514   14407 system_pods.go:89] "registry-wf47l" [0d3a67d8-466b-42fa-8b7b-e306fee91c84] Running
	I0425 18:34:00.296520   14407 system_pods.go:89] "snapshot-controller-745499f584-8fj49" [bb9e98cb-566f-4856-a7c4-5ae8da1442f4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0425 18:34:00.296529   14407 system_pods.go:89] "snapshot-controller-745499f584-q6cdl" [8f39480c-bcbe-4ed0-8f86-c5afca6fda25] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0425 18:34:00.296537   14407 system_pods.go:89] "storage-provisioner" [930ba2a2-a45e-4db3-9e58-f57677e70097] Running
	I0425 18:34:00.296548   14407 system_pods.go:89] "tiller-deploy-6677d64bcd-dkd7m" [aa079112-30fb-4401-9271-cf4059a1c2ce] Running
	I0425 18:34:00.296563   14407 system_pods.go:126] duration metric: took 8.637829ms to wait for k8s-apps to be running ...
	I0425 18:34:00.296575   14407 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 18:34:00.296622   14407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:34:00.306733   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:00.312479   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:00.315772   14407 system_svc.go:56] duration metric: took 19.187649ms WaitForService to wait for kubelet
	I0425 18:34:00.315804   14407 kubeadm.go:576] duration metric: took 54.371751122s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 18:34:00.315829   14407 node_conditions.go:102] verifying NodePressure condition ...
	I0425 18:34:00.319177   14407 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:34:00.319205   14407 node_conditions.go:123] node cpu capacity is 2
	I0425 18:34:00.319227   14407 node_conditions.go:105] duration metric: took 3.391731ms to run NodePressure ...
	I0425 18:34:00.319242   14407 start.go:240] waiting for startup goroutines ...
	I0425 18:34:00.360096   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:00.364692   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:00.806737   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:00.812043   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:00.859342   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:00.864106   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:01.306725   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:01.313443   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:01.361349   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:01.365136   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:01.807564   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:01.812114   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:01.862167   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:01.868840   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:02.307469   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:02.315552   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:02.360564   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:02.364745   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:02.807056   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:02.812847   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:02.860950   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:02.866181   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:03.307223   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:03.312617   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:03.361998   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:03.368475   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:04.104946   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:04.105616   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:04.107070   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:04.111038   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:04.306617   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:04.312127   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:04.360369   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:04.365178   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:04.807494   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:04.811947   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:04.859974   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:04.865477   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:05.305602   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:05.314081   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:05.359686   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:05.365333   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:05.806523   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:05.811908   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:05.860061   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:05.865279   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:06.306180   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:06.312458   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:06.361558   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:06.364412   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:06.806992   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:06.813035   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:06.860760   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:06.865315   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:07.305798   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:07.316188   14407 kapi.go:107] duration metric: took 51.50865662s to wait for kubernetes.io/minikube-addons=registry ...
	I0425 18:34:07.359932   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:07.365341   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:07.809873   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:07.859303   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:07.864569   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:08.306160   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:08.361064   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:08.364880   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:08.806891   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:08.860301   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:08.864691   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:09.308007   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:09.360115   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:09.364478   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:09.806603   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:09.860328   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:09.864397   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:10.306521   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:10.361128   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:10.365356   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:10.806904   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:11.121012   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:11.124533   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:11.306163   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:11.360193   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:11.367460   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:11.806250   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:11.859568   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:11.864725   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:12.307041   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:12.366629   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:12.371470   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:12.809224   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:12.859811   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:12.866274   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:13.311932   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:13.365770   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:13.365956   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:13.807011   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:13.862337   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:13.864775   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:14.310298   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:14.360874   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:14.366318   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:14.806329   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:14.860534   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:14.864894   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:15.306916   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:15.360769   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:15.365925   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:15.807435   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:15.861298   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:15.864563   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:16.307602   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:16.359046   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:16.364408   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:16.806000   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:16.859769   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:16.865002   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:17.307318   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:17.361315   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:17.364687   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:17.807242   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:17.860090   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:17.865472   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:18.312112   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:18.371877   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:18.377610   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:18.806692   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:18.860861   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:18.866381   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:19.307751   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:19.367438   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:19.370470   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:19.810987   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:19.869722   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:19.869739   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:20.307136   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:20.374680   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:20.380618   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:20.806852   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:20.860783   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:20.866087   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:21.306886   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:21.360466   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:21.366366   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:21.806323   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:21.861793   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:21.864599   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:22.308163   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:22.360043   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:22.364956   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:22.809017   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:22.861876   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:22.870617   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:23.307206   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:23.360316   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:23.364544   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:23.806328   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:23.860632   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:23.864903   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:24.307177   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:24.359761   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:24.364928   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:24.806591   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:24.860107   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:24.864314   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:25.306005   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:25.359370   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:25.364256   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:25.806866   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:25.859490   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:25.865047   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:26.307181   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:26.360198   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:26.364619   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:26.806180   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:26.872441   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:26.876768   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:27.313347   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:27.363339   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:27.365093   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:27.807145   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:27.861674   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:27.865838   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:28.407125   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:28.408050   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:28.408199   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:28.809788   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:28.860317   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:28.864387   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:29.306649   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:29.359719   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:29.365149   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:29.807437   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:29.859530   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:29.864627   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:30.309553   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:30.359859   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:30.366649   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:30.805776   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:30.859397   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:30.867185   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:31.311124   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:31.362072   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:31.367818   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:31.806630   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:31.866808   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:31.870755   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:32.307156   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:32.359375   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:32.364421   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:32.807597   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:32.860267   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:32.864953   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:33.306962   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:33.359151   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:33.364105   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:33.806703   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:33.860422   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:33.864879   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:34.306867   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:34.359176   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:34.364408   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:34.806420   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:34.862008   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:34.866167   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:35.305471   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:35.360384   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:35.364338   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:35.809549   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:35.860043   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:35.865689   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:36.305933   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:36.359334   14407 kapi.go:107] duration metric: took 1m18.505375184s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0425 18:34:36.364546   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:36.807249   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:36.871233   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:37.309131   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:37.365660   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:37.807885   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:37.865508   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:38.308137   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:38.368739   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:38.807923   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:38.865936   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:39.306854   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:39.364848   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:39.806737   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:39.866482   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:40.308028   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:40.365882   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:40.806527   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:40.864847   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:41.306511   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:41.365913   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:41.807151   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:41.865524   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:42.306738   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:42.365921   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:42.807057   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:42.864877   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:43.306555   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:43.365507   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:43.806598   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:43.866087   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:44.307840   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:44.366171   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:44.810132   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:44.865266   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:45.306562   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:45.365994   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:45.806963   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:45.865393   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:46.305783   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:46.365878   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:46.807216   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:46.865339   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:47.306367   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:47.366450   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:47.806294   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:47.864823   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:48.306729   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:48.365634   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:48.807383   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:48.865239   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:49.306850   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:49.365072   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:49.810818   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:49.865075   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:50.307705   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:50.365945   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:50.806981   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:50.864486   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:51.306258   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:51.365201   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:51.806916   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:51.867838   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:52.306821   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:52.368009   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:52.807000   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:52.864898   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:53.309341   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:53.365827   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:53.806469   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:53.865439   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:54.307587   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:54.366182   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:54.927853   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:54.928078   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:55.306735   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:55.367364   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:55.809842   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:55.864973   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:56.307018   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:56.366531   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:56.807133   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:56.865391   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:57.306582   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:57.365713   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:57.807572   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:57.868704   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:58.307846   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:58.365006   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:58.807938   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:58.865004   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:59.306201   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:59.366560   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:59.807210   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:59.864847   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:00.307661   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:00.366465   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:00.807482   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:00.868125   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:01.307445   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:01.365271   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:01.807892   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:01.864985   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:02.309153   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:02.365939   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:02.807272   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:02.865205   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:03.306478   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:03.365547   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:03.806104   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:03.866176   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:04.306168   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:04.365975   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:04.806382   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:04.865203   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:05.307578   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:05.365569   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:05.805993   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:05.865381   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:06.310289   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:06.365837   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:06.807014   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:06.865950   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:07.311834   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:07.365227   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:07.806929   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:07.865172   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:08.308029   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:08.365869   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:08.807050   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:08.866054   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:09.306924   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:09.365483   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:09.808720   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:09.865797   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:10.306760   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:10.365749   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:10.806402   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:10.865486   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:11.307956   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:11.365171   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:11.805830   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:11.865773   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:12.310568   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:12.367173   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:12.805884   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:12.866403   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:13.306611   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:13.366622   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:13.806615   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:13.867732   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:14.307873   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:14.364845   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:14.807133   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:14.865193   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:15.310876   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:15.364714   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:15.806474   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:15.865675   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:16.309504   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:16.365575   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:16.808066   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:16.865237   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:17.307402   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:17.365814   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:17.807576   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:17.865163   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:18.319725   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:18.365677   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:18.806219   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:18.867245   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:19.312191   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:19.364944   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:19.807177   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:19.865272   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:20.307136   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:20.365481   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:20.806683   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:20.865401   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:21.306563   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:21.365464   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:21.806392   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:21.865349   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:22.305649   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:22.365508   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:22.808896   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:22.865694   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:23.306594   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:23.365382   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:23.808202   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:23.865277   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:24.307158   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:24.365158   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:24.809807   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:24.865696   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:25.307077   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:25.365756   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:25.806788   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:25.865146   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:26.307448   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:26.366090   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:26.807105   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:26.864963   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:27.306553   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:27.365597   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:27.812348   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:27.865673   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:28.306252   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:28.365228   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:28.805953   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:28.865568   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:29.306063   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:29.365233   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:29.806004   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:29.864426   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:30.306814   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:30.364265   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:30.805902   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:30.865324   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:31.306873   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:31.364938   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:31.807208   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:31.864636   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:32.306992   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:32.365388   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:32.806288   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:32.864628   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:33.307678   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:33.368337   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:33.806112   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:33.864882   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:34.306220   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:34.365791   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:34.806472   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:34.865427   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:35.305645   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:35.365235   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:35.805669   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:35.865199   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:36.305614   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:36.365835   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:36.806171   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:36.865512   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:37.305942   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:37.364547   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:37.806117   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:37.864480   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:38.306067   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:38.364789   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:38.806552   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:38.865633   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:39.310910   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:39.364635   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:39.807596   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:39.868626   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:40.306633   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:40.366003   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:40.805892   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:40.864118   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:41.306236   14407 kapi.go:107] duration metric: took 2m25.504906965s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0425 18:35:41.365468   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:41.864944   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:42.367399   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:42.865902   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:43.365735   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:44.038176   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:44.365493   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:44.864950   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:45.364588   14407 kapi.go:107] duration metric: took 2m25.503414037s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0425 18:35:45.366384   14407 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-477322 cluster.
	I0425 18:35:45.367725   14407 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0425 18:35:45.369101   14407 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0425 18:35:45.370597   14407 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, metrics-server, helm-tiller, ingress-dns, inspektor-gadget, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0425 18:35:45.371895   14407 addons.go:505] duration metric: took 2m39.427770005s for enable addons: enabled=[nvidia-device-plugin storage-provisioner-rancher storage-provisioner metrics-server helm-tiller ingress-dns inspektor-gadget cloud-spanner yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0425 18:35:45.371936   14407 start.go:245] waiting for cluster config update ...
	I0425 18:35:45.371957   14407 start.go:254] writing updated cluster config ...
	I0425 18:35:45.372197   14407 ssh_runner.go:195] Run: rm -f paused
	I0425 18:35:45.423823   14407 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 18:35:45.425178   14407 out.go:177] * Done! kubectl is now configured to use "addons-477322" cluster and "default" namespace by default
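
Editor's note: the gcp-auth messages above say that credentials are mounted into every new pod unless the pod carries a label with the `gcp-auth-skip-secret` key. As a minimal sketch of how such a manifest could be produced (not part of the report; the pod name is hypothetical, the image is the hello-app image that already appears in the container list below, and the label value is illustrative since the message only calls out the key), using client-go types:

// pod_skip_gcp_auth.go — sketch of a pod that opts out of gcp-auth credential
// mounting via the gcp-auth-skip-secret label mentioned in the output above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name: "hello-no-gcp-creds", // hypothetical name
			Labels: map[string]string{
				// Presence of this key tells the gcp-auth webhook not to
				// mount credentials into the pod; "true" is illustrative.
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "gcr.io/google-samples/hello-app:1.0"},
			},
		},
	}

	out, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // manifest suitable for `kubectl apply -f -`
}

Existing pods are not relabeled retroactively; as the output notes, they must be recreated (or the addon re-enabled with --refresh) to pick up credential mounting changes.
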
	
	
	==> CRI-O <==
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.738824769Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714070337738795213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579092,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=075b11cc-5461-4deb-b2ee-eab0208605a7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.739710388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad4424f1-66b7-4070-8f5b-8b53b07ced1f name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.739763665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad4424f1-66b7-4070-8f5b-8b53b07ced1f name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.740036642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ccb67610efb69bc365548edb3198a2dff3a42514865ab1033b33e7f7b5c90af,PodSandboxId:7d4bc39231a790c6b454e328ee9ca88553ff3d167528fe0a2baf513490142817,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714070331473706014,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-nstfm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88ef2b0e-e7d8-48d4-b29b-658685abefae,},Annotations:map[string]string{io.kubernetes.container.hash: ba810db,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402c7e90494399d2feeaa235e691145866b2725e37aa478f5804487a743ac56d,PodSandboxId:105e8c1342c557b597d234bbc587695ba49b3c540dbe40ac0c65b9342cca3c2f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714070191079882306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 174bca0d-e34d-4acf-8cb7-74f929b70346,},Annotations:map[string]string{io.kuberne
tes.container.hash: 73757bdd,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e423af13f38271273791d9ffaaba540df7d18373a078a69cd5a8ffe096ab0c6,PodSandboxId:5e588693571d85f475e2522defcd89fa2b3eb4972947ef0afebf135f7ddc22e2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714070168169033617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-4hdvs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: b0b1c3bf-f2b2-4b6a-ba59-104181e36d01,},Annotations:map[string]string{io.kubernetes.container.hash: c3244dca,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444cf98d597b26ac307437fc04a6576f39c4ddc200c2eeb2e0444204f26594e7,PodSandboxId:a90eb47d1f5c3908965f516b8db8a75cc1a875de777df4706de32481860f2794,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714070144465611562,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fmcbp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8ed5953a-1f88-4b6d-abba-be0571627016,},Annotations:map[string]string{io.kubernetes.container.hash: dbc4a0ba,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df04bc6e0a8645fec759d27fa1ffcc26a8380f5ad630eeba571a082084dfe0cf,PodSandboxId:82d5a69e4c3c29ae7933af38b110fb706734a0d466fa1fc222a57a98f99d5387,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171407
0052690182505,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-z4ljv,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3df1cc7b-c249-4597-b8c9-3a9b4bc48222,},Annotations:map[string]string{io.kubernetes.container.hash: 27dec842,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c,PodSandboxId:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714070000274234940,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bw7rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,},Annotations:map[string]string{io.kubernetes.container.hash: 6ea71e24,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a581b2bef974518ff15839d7127b97175c6ca2c11630a8877145f8e707dacfa,PodSandboxId:4a825f45bb82f480f19c760f92f5fb3d1cd992a4a2a5607cf40300022a7a04bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714069995016203233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930ba2a2-a45e-4db3-9e58-f57677e70097,},Annotations:map[string]string{io.kubernetes.container.hash: f492499d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04a27897034cedb321fa5f06387e220bd535ffa851de1660e5098a7206068c5,PodSandboxId:c6282053a094a8dd1a76c99595926343e07c5331a83796e173f5d3fdaf89494e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714069989920485509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6wpfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4f7208b-0870-4a3c-bb2e-e6ad6d87404b,},Annotations:map[string]string{io.kubernetes.container.hash: 7416a455,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d13c42367e56a88594713117ba450b13bde86d14fdd1911ed31bcae79c6255,PodSand
boxId:3c411906655780331b0753e2372b30e75495c6fd8632c325dc411fb29f55f4e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714069986854907049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgvqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa79ab2f-3125-426d-a63a-8dba44e5e06c,},Annotations:map[string]string{io.kubernetes.container.hash: a2478d34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba098b391087ab69c154d60e93cdbca9709dae3e860e358078373ea832309cad,PodSandboxId:c9baef8b5a1b164f4c9c26b4322
97e34f97ec6569ead5e5a61f84c686cace732,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714069966399786701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9ea0a35cb7ac41978bfcc3c445f98ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5bfc3a10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9db646bf6dbf0e9d7d21d563363f55428cc69781ff0b871042fc82cd43a56d,PodSandboxId:51bd1af867d66ae37df43e25a0d4fa0940a5273537029b7bbc608342f253ffc6,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714069966287069223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1f6a44bb1fb2be1ae94c311e3fa409,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbbc3655cb9eee9c48e5c703032e6c66e0f3c1d8fe46c50b43c2e8e617986f7,PodSandboxId:05fdcdfc675f3db365c6e01088655c9ffc8b307f104d0e356bd3034d2a6c2397,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714069966335430671,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad0cd299b604c07a812a0bc88262082,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ce0b80f86d5e85292f94da6f1cd5d7db205853dfcfe415aa0059ccb450f83,PodSandboxId:9fc9b6d3e29c535836c0dabd618a8f703355936625aa638d0e448264019d0a04,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714069966258899721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de2573bdfcfa3e02e7bc88b90313a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: c53b7525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad4424f1-66b7-4070-8f5b-8b53b07ced1f name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.788781982Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce562865-d8e1-4d3a-a9a2-ad0b1adfc0f8 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.788886293Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce562865-d8e1-4d3a-a9a2-ad0b1adfc0f8 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.790567415Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09004c4c-2014-4cb9-9313-f722f0fb0db6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.791844462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714070337791816297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579092,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09004c4c-2014-4cb9-9313-f722f0fb0db6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.792721586Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7438a43a-e364-4557-b7ac-1b2b07d7d00f name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.792780305Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7438a43a-e364-4557-b7ac-1b2b07d7d00f name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.793073660Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ccb67610efb69bc365548edb3198a2dff3a42514865ab1033b33e7f7b5c90af,PodSandboxId:7d4bc39231a790c6b454e328ee9ca88553ff3d167528fe0a2baf513490142817,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714070331473706014,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-nstfm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88ef2b0e-e7d8-48d4-b29b-658685abefae,},Annotations:map[string]string{io.kubernetes.container.hash: ba810db,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402c7e90494399d2feeaa235e691145866b2725e37aa478f5804487a743ac56d,PodSandboxId:105e8c1342c557b597d234bbc587695ba49b3c540dbe40ac0c65b9342cca3c2f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714070191079882306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 174bca0d-e34d-4acf-8cb7-74f929b70346,},Annotations:map[string]string{io.kuberne
tes.container.hash: 73757bdd,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e423af13f38271273791d9ffaaba540df7d18373a078a69cd5a8ffe096ab0c6,PodSandboxId:5e588693571d85f475e2522defcd89fa2b3eb4972947ef0afebf135f7ddc22e2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714070168169033617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-4hdvs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: b0b1c3bf-f2b2-4b6a-ba59-104181e36d01,},Annotations:map[string]string{io.kubernetes.container.hash: c3244dca,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444cf98d597b26ac307437fc04a6576f39c4ddc200c2eeb2e0444204f26594e7,PodSandboxId:a90eb47d1f5c3908965f516b8db8a75cc1a875de777df4706de32481860f2794,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714070144465611562,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fmcbp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8ed5953a-1f88-4b6d-abba-be0571627016,},Annotations:map[string]string{io.kubernetes.container.hash: dbc4a0ba,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df04bc6e0a8645fec759d27fa1ffcc26a8380f5ad630eeba571a082084dfe0cf,PodSandboxId:82d5a69e4c3c29ae7933af38b110fb706734a0d466fa1fc222a57a98f99d5387,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171407
0052690182505,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-z4ljv,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3df1cc7b-c249-4597-b8c9-3a9b4bc48222,},Annotations:map[string]string{io.kubernetes.container.hash: 27dec842,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c,PodSandboxId:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714070000274234940,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bw7rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,},Annotations:map[string]string{io.kubernetes.container.hash: 6ea71e24,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a581b2bef974518ff15839d7127b97175c6ca2c11630a8877145f8e707dacfa,PodSandboxId:4a825f45bb82f480f19c760f92f5fb3d1cd992a4a2a5607cf40300022a7a04bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714069995016203233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930ba2a2-a45e-4db3-9e58-f57677e70097,},Annotations:map[string]string{io.kubernetes.container.hash: f492499d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04a27897034cedb321fa5f06387e220bd535ffa851de1660e5098a7206068c5,PodSandboxId:c6282053a094a8dd1a76c99595926343e07c5331a83796e173f5d3fdaf89494e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714069989920485509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6wpfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4f7208b-0870-4a3c-bb2e-e6ad6d87404b,},Annotations:map[string]string{io.kubernetes.container.hash: 7416a455,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d13c42367e56a88594713117ba450b13bde86d14fdd1911ed31bcae79c6255,PodSand
boxId:3c411906655780331b0753e2372b30e75495c6fd8632c325dc411fb29f55f4e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714069986854907049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgvqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa79ab2f-3125-426d-a63a-8dba44e5e06c,},Annotations:map[string]string{io.kubernetes.container.hash: a2478d34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba098b391087ab69c154d60e93cdbca9709dae3e860e358078373ea832309cad,PodSandboxId:c9baef8b5a1b164f4c9c26b4322
97e34f97ec6569ead5e5a61f84c686cace732,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714069966399786701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9ea0a35cb7ac41978bfcc3c445f98ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5bfc3a10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9db646bf6dbf0e9d7d21d563363f55428cc69781ff0b871042fc82cd43a56d,PodSandboxId:51bd1af867d66ae37df43e25a0d4fa0940a5273537029b7bbc608342f253ffc6,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714069966287069223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1f6a44bb1fb2be1ae94c311e3fa409,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbbc3655cb9eee9c48e5c703032e6c66e0f3c1d8fe46c50b43c2e8e617986f7,PodSandboxId:05fdcdfc675f3db365c6e01088655c9ffc8b307f104d0e356bd3034d2a6c2397,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714069966335430671,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad0cd299b604c07a812a0bc88262082,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ce0b80f86d5e85292f94da6f1cd5d7db205853dfcfe415aa0059ccb450f83,PodSandboxId:9fc9b6d3e29c535836c0dabd618a8f703355936625aa638d0e448264019d0a04,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714069966258899721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de2573bdfcfa3e02e7bc88b90313a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: c53b7525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7438a43a-e364-4557-b7ac-1b2b07d7d00f name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.842012858Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f764064-e424-44d8-a485-cdacf7d3b801 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.842346020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f764064-e424-44d8-a485-cdacf7d3b801 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.843711533Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b9a4288-e908-48ec-a50f-b3efb8a88b9f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.845082252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714070337845049908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579092,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b9a4288-e908-48ec-a50f-b3efb8a88b9f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.845774519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5681a5a-f5f5-4c16-ba4b-e2ce0b0edc41 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.845852131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5681a5a-f5f5-4c16-ba4b-e2ce0b0edc41 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.846112540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ccb67610efb69bc365548edb3198a2dff3a42514865ab1033b33e7f7b5c90af,PodSandboxId:7d4bc39231a790c6b454e328ee9ca88553ff3d167528fe0a2baf513490142817,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714070331473706014,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-nstfm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88ef2b0e-e7d8-48d4-b29b-658685abefae,},Annotations:map[string]string{io.kubernetes.container.hash: ba810db,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402c7e90494399d2feeaa235e691145866b2725e37aa478f5804487a743ac56d,PodSandboxId:105e8c1342c557b597d234bbc587695ba49b3c540dbe40ac0c65b9342cca3c2f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714070191079882306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 174bca0d-e34d-4acf-8cb7-74f929b70346,},Annotations:map[string]string{io.kuberne
tes.container.hash: 73757bdd,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e423af13f38271273791d9ffaaba540df7d18373a078a69cd5a8ffe096ab0c6,PodSandboxId:5e588693571d85f475e2522defcd89fa2b3eb4972947ef0afebf135f7ddc22e2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714070168169033617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-4hdvs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: b0b1c3bf-f2b2-4b6a-ba59-104181e36d01,},Annotations:map[string]string{io.kubernetes.container.hash: c3244dca,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444cf98d597b26ac307437fc04a6576f39c4ddc200c2eeb2e0444204f26594e7,PodSandboxId:a90eb47d1f5c3908965f516b8db8a75cc1a875de777df4706de32481860f2794,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714070144465611562,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fmcbp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8ed5953a-1f88-4b6d-abba-be0571627016,},Annotations:map[string]string{io.kubernetes.container.hash: dbc4a0ba,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df04bc6e0a8645fec759d27fa1ffcc26a8380f5ad630eeba571a082084dfe0cf,PodSandboxId:82d5a69e4c3c29ae7933af38b110fb706734a0d466fa1fc222a57a98f99d5387,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171407
0052690182505,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-z4ljv,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3df1cc7b-c249-4597-b8c9-3a9b4bc48222,},Annotations:map[string]string{io.kubernetes.container.hash: 27dec842,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c,PodSandboxId:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714070000274234940,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bw7rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,},Annotations:map[string]string{io.kubernetes.container.hash: 6ea71e24,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a581b2bef974518ff15839d7127b97175c6ca2c11630a8877145f8e707dacfa,PodSandboxId:4a825f45bb82f480f19c760f92f5fb3d1cd992a4a2a5607cf40300022a7a04bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714069995016203233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930ba2a2-a45e-4db3-9e58-f57677e70097,},Annotations:map[string]string{io.kubernetes.container.hash: f492499d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04a27897034cedb321fa5f06387e220bd535ffa851de1660e5098a7206068c5,PodSandboxId:c6282053a094a8dd1a76c99595926343e07c5331a83796e173f5d3fdaf89494e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714069989920485509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6wpfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4f7208b-0870-4a3c-bb2e-e6ad6d87404b,},Annotations:map[string]string{io.kubernetes.container.hash: 7416a455,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d13c42367e56a88594713117ba450b13bde86d14fdd1911ed31bcae79c6255,PodSand
boxId:3c411906655780331b0753e2372b30e75495c6fd8632c325dc411fb29f55f4e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714069986854907049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgvqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa79ab2f-3125-426d-a63a-8dba44e5e06c,},Annotations:map[string]string{io.kubernetes.container.hash: a2478d34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba098b391087ab69c154d60e93cdbca9709dae3e860e358078373ea832309cad,PodSandboxId:c9baef8b5a1b164f4c9c26b4322
97e34f97ec6569ead5e5a61f84c686cace732,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714069966399786701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9ea0a35cb7ac41978bfcc3c445f98ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5bfc3a10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9db646bf6dbf0e9d7d21d563363f55428cc69781ff0b871042fc82cd43a56d,PodSandboxId:51bd1af867d66ae37df43e25a0d4fa0940a5273537029b7bbc608342f253ffc6,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714069966287069223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1f6a44bb1fb2be1ae94c311e3fa409,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbbc3655cb9eee9c48e5c703032e6c66e0f3c1d8fe46c50b43c2e8e617986f7,PodSandboxId:05fdcdfc675f3db365c6e01088655c9ffc8b307f104d0e356bd3034d2a6c2397,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714069966335430671,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad0cd299b604c07a812a0bc88262082,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ce0b80f86d5e85292f94da6f1cd5d7db205853dfcfe415aa0059ccb450f83,PodSandboxId:9fc9b6d3e29c535836c0dabd618a8f703355936625aa638d0e448264019d0a04,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714069966258899721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de2573bdfcfa3e02e7bc88b90313a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: c53b7525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5681a5a-f5f5-4c16-ba4b-e2ce0b0edc41 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.890704713Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0fa3350b-7c46-49b3-a55d-48f479499398 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.890818412Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0fa3350b-7c46-49b3-a55d-48f479499398 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.892422552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76dababc-1f63-423c-99f4-4836e8bed7a3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.893712810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714070337893683996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579092,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76dababc-1f63-423c-99f4-4836e8bed7a3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.894626225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75121f95-5263-43ea-ac66-59280019b1d4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.894704885Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75121f95-5263-43ea-ac66-59280019b1d4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:38:57 addons-477322 crio[681]: time="2024-04-25 18:38:57.894998288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ccb67610efb69bc365548edb3198a2dff3a42514865ab1033b33e7f7b5c90af,PodSandboxId:7d4bc39231a790c6b454e328ee9ca88553ff3d167528fe0a2baf513490142817,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714070331473706014,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-nstfm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88ef2b0e-e7d8-48d4-b29b-658685abefae,},Annotations:map[string]string{io.kubernetes.container.hash: ba810db,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402c7e90494399d2feeaa235e691145866b2725e37aa478f5804487a743ac56d,PodSandboxId:105e8c1342c557b597d234bbc587695ba49b3c540dbe40ac0c65b9342cca3c2f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714070191079882306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 174bca0d-e34d-4acf-8cb7-74f929b70346,},Annotations:map[string]string{io.kuberne
tes.container.hash: 73757bdd,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e423af13f38271273791d9ffaaba540df7d18373a078a69cd5a8ffe096ab0c6,PodSandboxId:5e588693571d85f475e2522defcd89fa2b3eb4972947ef0afebf135f7ddc22e2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714070168169033617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-4hdvs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: b0b1c3bf-f2b2-4b6a-ba59-104181e36d01,},Annotations:map[string]string{io.kubernetes.container.hash: c3244dca,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444cf98d597b26ac307437fc04a6576f39c4ddc200c2eeb2e0444204f26594e7,PodSandboxId:a90eb47d1f5c3908965f516b8db8a75cc1a875de777df4706de32481860f2794,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714070144465611562,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fmcbp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8ed5953a-1f88-4b6d-abba-be0571627016,},Annotations:map[string]string{io.kubernetes.container.hash: dbc4a0ba,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df04bc6e0a8645fec759d27fa1ffcc26a8380f5ad630eeba571a082084dfe0cf,PodSandboxId:82d5a69e4c3c29ae7933af38b110fb706734a0d466fa1fc222a57a98f99d5387,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171407
0052690182505,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-z4ljv,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3df1cc7b-c249-4597-b8c9-3a9b4bc48222,},Annotations:map[string]string{io.kubernetes.container.hash: 27dec842,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c,PodSandboxId:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714070000274234940,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bw7rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,},Annotations:map[string]string{io.kubernetes.container.hash: 6ea71e24,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a581b2bef974518ff15839d7127b97175c6ca2c11630a8877145f8e707dacfa,PodSandboxId:4a825f45bb82f480f19c760f92f5fb3d1cd992a4a2a5607cf40300022a7a04bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714069995016203233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930ba2a2-a45e-4db3-9e58-f57677e70097,},Annotations:map[string]string{io.kubernetes.container.hash: f492499d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04a27897034cedb321fa5f06387e220bd535ffa851de1660e5098a7206068c5,PodSandboxId:c6282053a094a8dd1a76c99595926343e07c5331a83796e173f5d3fdaf89494e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714069989920485509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6wpfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4f7208b-0870-4a3c-bb2e-e6ad6d87404b,},Annotations:map[string]string{io.kubernetes.container.hash: 7416a455,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d13c42367e56a88594713117ba450b13bde86d14fdd1911ed31bcae79c6255,PodSand
boxId:3c411906655780331b0753e2372b30e75495c6fd8632c325dc411fb29f55f4e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714069986854907049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgvqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa79ab2f-3125-426d-a63a-8dba44e5e06c,},Annotations:map[string]string{io.kubernetes.container.hash: a2478d34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba098b391087ab69c154d60e93cdbca9709dae3e860e358078373ea832309cad,PodSandboxId:c9baef8b5a1b164f4c9c26b4322
97e34f97ec6569ead5e5a61f84c686cace732,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714069966399786701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9ea0a35cb7ac41978bfcc3c445f98ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5bfc3a10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9db646bf6dbf0e9d7d21d563363f55428cc69781ff0b871042fc82cd43a56d,PodSandboxId:51bd1af867d66ae37df43e25a0d4fa0940a5273537029b7bbc608342f253ffc6,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714069966287069223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1f6a44bb1fb2be1ae94c311e3fa409,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbbc3655cb9eee9c48e5c703032e6c66e0f3c1d8fe46c50b43c2e8e617986f7,PodSandboxId:05fdcdfc675f3db365c6e01088655c9ffc8b307f104d0e356bd3034d2a6c2397,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714069966335430671,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad0cd299b604c07a812a0bc88262082,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ce0b80f86d5e85292f94da6f1cd5d7db205853dfcfe415aa0059ccb450f83,PodSandboxId:9fc9b6d3e29c535836c0dabd618a8f703355936625aa638d0e448264019d0a04,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714069966258899721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de2573bdfcfa3e02e7bc88b90313a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: c53b7525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75121f95-5263-43ea-ac66-59280019b1d4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5ccb67610efb6       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 6 seconds ago       Running             hello-world-app           0                   7d4bc39231a79       hello-world-app-86c47465fc-nstfm
	402c7e9049439       docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88                         2 minutes ago       Running             nginx                     0                   105e8c1342c55       nginx
	7e423af13f382       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                   2 minutes ago       Running             headlamp                  0                   5e588693571d8       headlamp-7559bf459f-4hdvs
	444cf98d597b2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            3 minutes ago       Running             gcp-auth                  0                   a90eb47d1f5c3       gcp-auth-5db96cd9b4-fmcbp
	df04bc6e0a864       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         4 minutes ago       Running             yakd                      0                   82d5a69e4c3c2       yakd-dashboard-5ddbf7d777-z4ljv
	1388e5efb882a       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   5 minutes ago       Running             metrics-server            0                   8a59b60c33812       metrics-server-c59844bb4-bw7rc
	8a581b2bef974       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        5 minutes ago       Running             storage-provisioner       0                   4a825f45bb82f       storage-provisioner
	b04a27897034c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        5 minutes ago       Running             coredns                   0                   c6282053a094a       coredns-7db6d8ff4d-6wpfr
	e5d13c42367e5       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                        5 minutes ago       Running             kube-proxy                0                   3c41190665578       kube-proxy-rgvqp
	ba098b391087a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        6 minutes ago       Running             etcd                      0                   c9baef8b5a1b1       etcd-addons-477322
	dcbbc3655cb9e       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                        6 minutes ago       Running             kube-controller-manager   0                   05fdcdfc675f3       kube-controller-manager-addons-477322
	7c9db646bf6db       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                        6 minutes ago       Running             kube-scheduler            0                   51bd1af867d66       kube-scheduler-addons-477322
	e91ce0b80f86d       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                        6 minutes ago       Running             kube-apiserver            0                   9fc9b6d3e29c5       kube-apiserver-addons-477322
	
	
	==> coredns [b04a27897034cedb321fa5f06387e220bd535ffa851de1660e5098a7206068c5] <==
	[INFO] 10.244.0.7:44463 - 66 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000804485s
	[INFO] 10.244.0.7:52059 - 303 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000174658s
	[INFO] 10.244.0.7:52059 - 27177 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091901s
	[INFO] 10.244.0.7:37509 - 42635 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093057s
	[INFO] 10.244.0.7:37509 - 45449 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096511s
	[INFO] 10.244.0.7:46568 - 34014 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000110061s
	[INFO] 10.244.0.7:46568 - 20703 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000177556s
	[INFO] 10.244.0.7:36374 - 64719 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000125838s
	[INFO] 10.244.0.7:36374 - 11722 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074259s
	[INFO] 10.244.0.7:47843 - 53516 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078397s
	[INFO] 10.244.0.7:47843 - 5170 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030202s
	[INFO] 10.244.0.7:55495 - 33669 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108891s
	[INFO] 10.244.0.7:55495 - 52103 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061179s
	[INFO] 10.244.0.7:54500 - 40811 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125009s
	[INFO] 10.244.0.7:54500 - 31850 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093013s
	[INFO] 10.244.0.22:34137 - 10931 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000554739s
	[INFO] 10.244.0.22:56080 - 9082 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00066313s
	[INFO] 10.244.0.22:40405 - 1108 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160768s
	[INFO] 10.244.0.22:35532 - 56427 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111907s
	[INFO] 10.244.0.22:40357 - 8383 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117509s
	[INFO] 10.244.0.22:60494 - 7978 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121966s
	[INFO] 10.244.0.22:36247 - 26665 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001445239s
	[INFO] 10.244.0.22:58557 - 39543 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001635333s
	[INFO] 10.244.0.25:50801 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0002597s
	[INFO] 10.244.0.25:43440 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000211661s
	
	
	==> describe nodes <==
	Name:               addons-477322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-477322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=addons-477322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T18_32_52_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-477322
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:32:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-477322
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 18:38:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 18:36:56 +0000   Thu, 25 Apr 2024 18:32:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 18:36:56 +0000   Thu, 25 Apr 2024 18:32:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 18:36:56 +0000   Thu, 25 Apr 2024 18:32:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 18:36:56 +0000   Thu, 25 Apr 2024 18:32:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    addons-477322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 beb887a3c48d42baab55e27f20912f96
	  System UUID:                beb887a3-c48d-42ba-ab55-e27f20912f96
	  Boot ID:                    9e9616d2-9083-4750-bf85-df17f463b7e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-nstfm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-5db96cd9b4-fmcbp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  headlamp                    headlamp-7559bf459f-4hdvs                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 coredns-7db6d8ff4d-6wpfr                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m53s
	  kube-system                 etcd-addons-477322                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m8s
	  kube-system                 kube-apiserver-addons-477322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-477322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-proxy-rgvqp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-scheduler-addons-477322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 metrics-server-c59844bb4-bw7rc           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m46s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-z4ljv          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m50s  kube-proxy       
	  Normal  Starting                 6m7s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m7s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m7s   kubelet          Node addons-477322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m7s   kubelet          Node addons-477322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m7s   kubelet          Node addons-477322 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m6s   kubelet          Node addons-477322 status is now: NodeReady
	  Normal  RegisteredNode           5m54s  node-controller  Node addons-477322 event: Registered Node addons-477322 in Controller
	
	
	==> dmesg <==
	[  +0.155337] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.050926] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.157085] kauditd_printk_skb: 126 callbacks suppressed
	[  +6.969190] kauditd_printk_skb: 109 callbacks suppressed
	[ +13.238306] kauditd_printk_skb: 23 callbacks suppressed
	[ +22.635649] kauditd_printk_skb: 2 callbacks suppressed
	[Apr25 18:34] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.113415] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.556885] kauditd_printk_skb: 59 callbacks suppressed
	[  +6.234245] kauditd_printk_skb: 21 callbacks suppressed
	[Apr25 18:35] kauditd_printk_skb: 24 callbacks suppressed
	[ +15.417081] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.582374] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.349567] kauditd_printk_skb: 11 callbacks suppressed
	[ +12.406275] kauditd_printk_skb: 32 callbacks suppressed
	[Apr25 18:36] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.001322] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.036832] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.243491] kauditd_printk_skb: 27 callbacks suppressed
	[  +9.014676] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.527007] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.889041] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.746476] kauditd_printk_skb: 33 callbacks suppressed
	[Apr25 18:38] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.159948] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [ba098b391087ab69c154d60e93cdbca9709dae3e860e358078373ea832309cad] <==
	{"level":"warn","ts":"2024-04-25T18:34:11.105712Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.832937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11167"}
	{"level":"info","ts":"2024-04-25T18:34:11.106266Z","caller":"traceutil/trace.go:171","msg":"trace[1045415287] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:979; }","duration":"255.412898ms","start":"2024-04-25T18:34:10.850832Z","end":"2024-04-25T18:34:11.106245Z","steps":["trace[1045415287] 'agreement among raft nodes before linearized reading'  (duration: 254.695501ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:34:28.387147Z","caller":"traceutil/trace.go:171","msg":"trace[1455625901] transaction","detail":"{read_only:false; response_revision:1088; number_of_response:1; }","duration":"240.643015ms","start":"2024-04-25T18:34:28.14647Z","end":"2024-04-25T18:34:28.387113Z","steps":["trace[1455625901] 'process raft request'  (duration: 238.426866ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:34:54.908156Z","caller":"traceutil/trace.go:171","msg":"trace[730924886] linearizableReadLoop","detail":"{readStateIndex:1220; appliedIndex:1219; }","duration":"119.448648ms","start":"2024-04-25T18:34:54.788692Z","end":"2024-04-25T18:34:54.90814Z","steps":["trace[730924886] 'read index received'  (duration: 119.314554ms)","trace[730924886] 'applied index is now lower than readState.Index'  (duration: 133.684µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-25T18:34:54.908577Z","caller":"traceutil/trace.go:171","msg":"trace[41928026] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"297.813998ms","start":"2024-04-25T18:34:54.610751Z","end":"2024-04-25T18:34:54.908565Z","steps":["trace[41928026] 'process raft request'  (duration: 297.300753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:34:54.908891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.174024ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14358"}
	{"level":"info","ts":"2024-04-25T18:34:54.909619Z","caller":"traceutil/trace.go:171","msg":"trace[338300484] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1181; }","duration":"120.953396ms","start":"2024-04-25T18:34:54.788656Z","end":"2024-04-25T18:34:54.909609Z","steps":["trace[338300484] 'agreement among raft nodes before linearized reading'  (duration: 120.121509ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:35:44.017183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.975264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-25T18:35:44.017231Z","caller":"traceutil/trace.go:171","msg":"trace[131467759] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1271; }","duration":"202.044098ms","start":"2024-04-25T18:35:43.815175Z","end":"2024-04-25T18:35:44.017219Z","steps":["trace[131467759] 'range keys from in-memory index tree'  (duration: 201.904708ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:35:44.017245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.892089ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-04-25T18:35:44.017283Z","caller":"traceutil/trace.go:171","msg":"trace[390698173] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1271; }","duration":"172.066584ms","start":"2024-04-25T18:35:43.845208Z","end":"2024-04-25T18:35:44.017274Z","steps":["trace[390698173] 'range keys from in-memory index tree'  (duration: 171.780329ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:36:06.070473Z","caller":"traceutil/trace.go:171","msg":"trace[382770286] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1467; }","duration":"359.401107ms","start":"2024-04-25T18:36:05.711054Z","end":"2024-04-25T18:36:06.070455Z","steps":["trace[382770286] 'process raft request'  (duration: 359.123114ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:36:06.070718Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T18:36:05.711042Z","time spent":"359.494666ms","remote":"127.0.0.1:33370","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":46,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/controllers/kube-system/registry\" mod_revision:942 > success:<request_delete_range:<key:\"/registry/controllers/kube-system/registry\" > > failure:<request_range:<key:\"/registry/controllers/kube-system/registry\" > >"}
	{"level":"info","ts":"2024-04-25T18:36:06.071137Z","caller":"traceutil/trace.go:171","msg":"trace[835363111] linearizableReadLoop","detail":"{readStateIndex:1525; appliedIndex:1525; }","duration":"299.549111ms","start":"2024-04-25T18:36:05.771579Z","end":"2024-04-25T18:36:06.071128Z","steps":["trace[835363111] 'read index received'  (duration: 299.54581ms)","trace[835363111] 'applied index is now lower than readState.Index'  (duration: 2.799µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-25T18:36:06.071256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.669087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6768"}
	{"level":"info","ts":"2024-04-25T18:36:06.071275Z","caller":"traceutil/trace.go:171","msg":"trace[2128029028] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1467; }","duration":"299.69434ms","start":"2024-04-25T18:36:05.771575Z","end":"2024-04-25T18:36:06.071269Z","steps":["trace[2128029028] 'agreement among raft nodes before linearized reading'  (duration: 299.601287ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:36:06.083433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.764629ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6768"}
	{"level":"info","ts":"2024-04-25T18:36:06.083491Z","caller":"traceutil/trace.go:171","msg":"trace[1694448508] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1467; }","duration":"274.841639ms","start":"2024-04-25T18:36:05.80864Z","end":"2024-04-25T18:36:06.083481Z","steps":["trace[1694448508] 'agreement among raft nodes before linearized reading'  (duration: 274.733399ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:36:06.083612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.37004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-04-25T18:36:06.08363Z","caller":"traceutil/trace.go:171","msg":"trace[1572570004] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1467; }","duration":"286.405391ms","start":"2024-04-25T18:36:05.797217Z","end":"2024-04-25T18:36:06.083622Z","steps":["trace[1572570004] 'agreement among raft nodes before linearized reading'  (duration: 286.360002ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:36:06.083091Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.103288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-25T18:36:06.084062Z","caller":"traceutil/trace.go:171","msg":"trace[650421928] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1467; }","duration":"269.099163ms","start":"2024-04-25T18:36:05.814955Z","end":"2024-04-25T18:36:06.084055Z","steps":["trace[650421928] 'agreement among raft nodes before linearized reading'  (duration: 268.108802ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:36:18.858389Z","caller":"traceutil/trace.go:171","msg":"trace[471666342] transaction","detail":"{read_only:false; response_revision:1536; number_of_response:1; }","duration":"124.188714ms","start":"2024-04-25T18:36:18.734083Z","end":"2024-04-25T18:36:18.858272Z","steps":["trace[471666342] 'process raft request'  (duration: 123.755033ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:36:19.581717Z","caller":"traceutil/trace.go:171","msg":"trace[2130447041] transaction","detail":"{read_only:false; response_revision:1537; number_of_response:1; }","duration":"177.763667ms","start":"2024-04-25T18:36:19.403936Z","end":"2024-04-25T18:36:19.5817Z","steps":["trace[2130447041] 'process raft request'  (duration: 177.193836ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:36:25.532516Z","caller":"traceutil/trace.go:171","msg":"trace[1875476033] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1586; }","duration":"224.662333ms","start":"2024-04-25T18:36:25.307838Z","end":"2024-04-25T18:36:25.532501Z","steps":["trace[1875476033] 'process raft request'  (duration: 224.387177ms)"],"step_count":1}
	
	
	==> gcp-auth [444cf98d597b26ac307437fc04a6576f39c4ddc200c2eeb2e0444204f26594e7] <==
	2024/04/25 18:35:51 Ready to write response ...
	2024/04/25 18:35:51 Ready to marshal response ...
	2024/04/25 18:35:51 Ready to write response ...
	2024/04/25 18:35:51 Ready to marshal response ...
	2024/04/25 18:35:51 Ready to write response ...
	2024/04/25 18:35:56 Ready to marshal response ...
	2024/04/25 18:35:56 Ready to write response ...
	2024/04/25 18:35:58 Ready to marshal response ...
	2024/04/25 18:35:58 Ready to write response ...
	2024/04/25 18:36:00 Ready to marshal response ...
	2024/04/25 18:36:00 Ready to write response ...
	2024/04/25 18:36:00 Ready to marshal response ...
	2024/04/25 18:36:00 Ready to write response ...
	2024/04/25 18:36:00 Ready to marshal response ...
	2024/04/25 18:36:00 Ready to write response ...
	2024/04/25 18:36:04 Ready to marshal response ...
	2024/04/25 18:36:04 Ready to write response ...
	2024/04/25 18:36:14 Ready to marshal response ...
	2024/04/25 18:36:14 Ready to write response ...
	2024/04/25 18:36:26 Ready to marshal response ...
	2024/04/25 18:36:26 Ready to write response ...
	2024/04/25 18:36:31 Ready to marshal response ...
	2024/04/25 18:36:31 Ready to write response ...
	2024/04/25 18:38:47 Ready to marshal response ...
	2024/04/25 18:38:47 Ready to write response ...
	
	
	==> kernel <==
	 18:38:58 up 6 min,  0 users,  load average: 0.80, 1.38, 0.77
	Linux addons-477322 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e91ce0b80f86d5e85292f94da6f1cd5d7db205853dfcfe415aa0059ccb450f83] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0425 18:34:23.011567       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.31.115:443: connect: connection refused
	E0425 18:34:23.012108       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.31.115:443: connect: connection refused
	E0425 18:34:23.024482       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.31.115:443: connect: connection refused
	I0425 18:34:23.177114       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0425 18:35:56.164989       1 conn.go:339] Error on socket receive: read tcp 192.168.39.239:8443->192.168.39.1:34768: use of closed network connection
	I0425 18:36:00.434030       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.161.99"}
	I0425 18:36:20.353904       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0425 18:36:21.407691       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0425 18:36:26.150834       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0425 18:36:26.360862       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.5.208"}
	I0425 18:36:26.837956       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0425 18:36:30.389622       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0425 18:36:41.286072       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0425 18:36:48.080753       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0425 18:36:48.080849       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0425 18:36:48.139611       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0425 18:36:48.140163       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0425 18:36:48.203763       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0425 18:36:48.203970       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0425 18:36:48.253927       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	W0425 18:36:49.204466       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0425 18:36:49.256412       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0425 18:36:49.256529       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0425 18:38:47.828545       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.4.205"}
	
	
	==> kube-controller-manager [dcbbc3655cb9eee9c48e5c703032e6c66e0f3c1d8fe46c50b43c2e8e617986f7] <==
	E0425 18:37:33.059655       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:37:36.264568       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:37:36.264627       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:37:57.419146       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:37:57.419211       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:37:58.707135       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:37:58.707248       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:38:13.582960       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:38:13.583167       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:38:22.490057       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:38:22.490261       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:38:35.544115       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:38:35.544372       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0425 18:38:47.669107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="54.901611ms"
	I0425 18:38:47.693827       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="24.424777ms"
	I0425 18:38:47.694681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="159.449µs"
	I0425 18:38:47.694757       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="19.68µs"
	I0425 18:38:47.715647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="27.067µs"
	W0425 18:38:49.741010       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:38:49.741070       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0425 18:38:49.897407       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0425 18:38:49.909463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="10.901µs"
	I0425 18:38:49.920722       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0425 18:38:52.197858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="15.988211ms"
	I0425 18:38:52.198070       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="81.066µs"
	
	
	==> kube-proxy [e5d13c42367e56a88594713117ba450b13bde86d14fdd1911ed31bcae79c6255] <==
	I0425 18:33:07.631973       1 server_linux.go:69] "Using iptables proxy"
	I0425 18:33:07.658383       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.239"]
	I0425 18:33:07.741876       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 18:33:07.741984       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 18:33:07.742002       1 server_linux.go:165] "Using iptables Proxier"
	I0425 18:33:07.758580       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 18:33:07.758786       1 server.go:872] "Version info" version="v1.30.0"
	I0425 18:33:07.758798       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 18:33:07.768541       1 config.go:192] "Starting service config controller"
	I0425 18:33:07.768582       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 18:33:07.768606       1 config.go:101] "Starting endpoint slice config controller"
	I0425 18:33:07.768610       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 18:33:07.769016       1 config.go:319] "Starting node config controller"
	I0425 18:33:07.769059       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 18:33:07.868799       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 18:33:07.868880       1 shared_informer.go:320] Caches are synced for service config
	I0425 18:33:07.869154       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7c9db646bf6dbf0e9d7d21d563363f55428cc69781ff0b871042fc82cd43a56d] <==
	W0425 18:32:49.124061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 18:32:49.124100       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 18:32:49.124161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 18:32:49.124200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0425 18:32:49.124270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 18:32:49.124383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 18:32:49.126463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0425 18:32:49.126690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0425 18:32:49.940179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0425 18:32:49.940389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0425 18:32:50.003442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 18:32:50.003498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 18:32:50.085167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 18:32:50.085257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0425 18:32:50.260484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 18:32:50.260626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 18:32:50.312940       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 18:32:50.313017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 18:32:50.317034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 18:32:50.317086       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 18:32:50.380256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 18:32:50.380377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0425 18:32:50.655234       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 18:32:50.655415       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0425 18:32:52.303610       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 25 18:38:49 addons-477322 kubelet[1283]: I0425 18:38:49.005783    1283 scope.go:117] "RemoveContainer" containerID="e0acfd20c1c7ea44007557d4e5cd1215b9bbaad43f38c70e3fa5791a7a05d2bc"
	Apr 25 18:38:49 addons-477322 kubelet[1283]: I0425 18:38:49.037401    1283 scope.go:117] "RemoveContainer" containerID="e0acfd20c1c7ea44007557d4e5cd1215b9bbaad43f38c70e3fa5791a7a05d2bc"
	Apr 25 18:38:49 addons-477322 kubelet[1283]: E0425 18:38:49.042756    1283 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0acfd20c1c7ea44007557d4e5cd1215b9bbaad43f38c70e3fa5791a7a05d2bc\": container with ID starting with e0acfd20c1c7ea44007557d4e5cd1215b9bbaad43f38c70e3fa5791a7a05d2bc not found: ID does not exist" containerID="e0acfd20c1c7ea44007557d4e5cd1215b9bbaad43f38c70e3fa5791a7a05d2bc"
	Apr 25 18:38:49 addons-477322 kubelet[1283]: I0425 18:38:49.042941    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0acfd20c1c7ea44007557d4e5cd1215b9bbaad43f38c70e3fa5791a7a05d2bc"} err="failed to get container status \"e0acfd20c1c7ea44007557d4e5cd1215b9bbaad43f38c70e3fa5791a7a05d2bc\": rpc error: code = NotFound desc = could not find container \"e0acfd20c1c7ea44007557d4e5cd1215b9bbaad43f38c70e3fa5791a7a05d2bc\": container with ID starting with e0acfd20c1c7ea44007557d4e5cd1215b9bbaad43f38c70e3fa5791a7a05d2bc not found: ID does not exist"
	Apr 25 18:38:49 addons-477322 kubelet[1283]: I0425 18:38:49.583117    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2b29e86-902f-43bc-95db-5900cc3f5725" path="/var/lib/kubelet/pods/c2b29e86-902f-43bc-95db-5900cc3f5725/volumes"
	Apr 25 18:38:51 addons-477322 kubelet[1283]: I0425 18:38:51.572614    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac1393ce-783f-494e-b919-e011c2a00d6d" path="/var/lib/kubelet/pods/ac1393ce-783f-494e-b919-e011c2a00d6d/volumes"
	Apr 25 18:38:51 addons-477322 kubelet[1283]: I0425 18:38:51.573066    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef29a9b5-e411-4156-8475-d0d498240422" path="/var/lib/kubelet/pods/ef29a9b5-e411-4156-8475-d0d498240422/volumes"
	Apr 25 18:38:51 addons-477322 kubelet[1283]: E0425 18:38:51.579862    1283 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:38:51 addons-477322 kubelet[1283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:38:51 addons-477322 kubelet[1283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:38:51 addons-477322 kubelet[1283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:38:51 addons-477322 kubelet[1283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 18:38:52 addons-477322 kubelet[1283]: I0425 18:38:52.237732    1283 scope.go:117] "RemoveContainer" containerID="943c8c998e9fa3145907e10dda6bae3b23e156dd9a0fb9c04fc82d79fb3b3ed2"
	Apr 25 18:38:52 addons-477322 kubelet[1283]: I0425 18:38:52.259288    1283 scope.go:117] "RemoveContainer" containerID="9ee67623b904b3306cf8a4a41b548deee0168a5c35b96b4022ed070c41805db6"
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.176158    1283 scope.go:117] "RemoveContainer" containerID="9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95"
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.196739    1283 scope.go:117] "RemoveContainer" containerID="9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95"
	Apr 25 18:38:53 addons-477322 kubelet[1283]: E0425 18:38:53.197261    1283 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95\": container with ID starting with 9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95 not found: ID does not exist" containerID="9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95"
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.197370    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95"} err="failed to get container status \"9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95\": rpc error: code = NotFound desc = could not find container \"9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95\": container with ID starting with 9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95 not found: ID does not exist"
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.312718    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc9858ea-8b29-48d6-9a91-d584980367d0-webhook-cert\") pod \"fc9858ea-8b29-48d6-9a91-d584980367d0\" (UID: \"fc9858ea-8b29-48d6-9a91-d584980367d0\") "
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.312767    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w46jc\" (UniqueName: \"kubernetes.io/projected/fc9858ea-8b29-48d6-9a91-d584980367d0-kube-api-access-w46jc\") pod \"fc9858ea-8b29-48d6-9a91-d584980367d0\" (UID: \"fc9858ea-8b29-48d6-9a91-d584980367d0\") "
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.317614    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9858ea-8b29-48d6-9a91-d584980367d0-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "fc9858ea-8b29-48d6-9a91-d584980367d0" (UID: "fc9858ea-8b29-48d6-9a91-d584980367d0"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.319484    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9858ea-8b29-48d6-9a91-d584980367d0-kube-api-access-w46jc" (OuterVolumeSpecName: "kube-api-access-w46jc") pod "fc9858ea-8b29-48d6-9a91-d584980367d0" (UID: "fc9858ea-8b29-48d6-9a91-d584980367d0"). InnerVolumeSpecName "kube-api-access-w46jc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.413591    1283 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w46jc\" (UniqueName: \"kubernetes.io/projected/fc9858ea-8b29-48d6-9a91-d584980367d0-kube-api-access-w46jc\") on node \"addons-477322\" DevicePath \"\""
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.413624    1283 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc9858ea-8b29-48d6-9a91-d584980367d0-webhook-cert\") on node \"addons-477322\" DevicePath \"\""
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.567056    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9858ea-8b29-48d6-9a91-d584980367d0" path="/var/lib/kubelet/pods/fc9858ea-8b29-48d6-9a91-d584980367d0/volumes"
	
	
	==> storage-provisioner [8a581b2bef974518ff15839d7127b97175c6ca2c11630a8877145f8e707dacfa] <==
	I0425 18:33:16.266399       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0425 18:33:16.293200       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0425 18:33:16.297471       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0425 18:33:16.365257       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0425 18:33:16.376924       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f4bdbf17-ef67-4f87-b6e9-7a526d889302", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-477322_55436776-d203-4ba2-8edd-415dd7c1f311 became leader
	I0425 18:33:16.379556       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-477322_55436776-d203-4ba2-8edd-415dd7c1f311!
	I0425 18:33:16.484580       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-477322_55436776-d203-4ba2-8edd-415dd7c1f311!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-477322 -n addons-477322
helpers_test.go:261: (dbg) Run:  kubectl --context addons-477322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.27s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (339.78s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.482318ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-bw7rc" [5e6ef0c9-2d28-429e-a92f-7bb24314635d] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006327611s
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (99.040274ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 3m6.538756409s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (68.443467ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 3m10.947853467s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (70.231657ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 3m16.14210591s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (76.56375ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 3m21.382530996s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (92.006669ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 3m36.322928025s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (62.871558ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 3m52.090875059s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (65.528818ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 4m17.463248368s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (61.244558ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 4m46.198770519s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (63.828213ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 5m35.368427372s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (64.514678ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 6m40.230991317s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (66.17755ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 7m15.329480623s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-477322 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-477322 top pods -n kube-system: exit status 1 (62.353179ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-6wpfr, age: 8m38.21969302s

                                                
                                                
** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-477322 -n addons-477322
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-477322 logs -n 25: (1.528150273s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:32 UTC |
	| delete  | -p download-only-019320                                                                     | download-only-019320 | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:32 UTC |
	| delete  | -p download-only-587952                                                                     | download-only-587952 | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:32 UTC |
	| delete  | -p download-only-019320                                                                     | download-only-019320 | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-815806 | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC |                     |
	|         | binary-mirror-815806                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42043                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-815806                                                                     | binary-mirror-815806 | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC |                     |
	|         | addons-477322                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC |                     |
	|         | addons-477322                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-477322 --wait=true                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:32 UTC | 25 Apr 24 18:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:35 UTC | 25 Apr 24 18:35 UTC |
	|         | -p addons-477322                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:35 UTC | 25 Apr 24 18:35 UTC |
	|         | addons-477322                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:35 UTC | 25 Apr 24 18:36 UTC |
	|         | -p addons-477322                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-477322 addons disable                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-477322 ip                                                                            | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	| addons  | addons-477322 addons disable                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-477322 ssh cat                                                                       | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | /opt/local-path-provisioner/pvc-c6aa81f4-fb5f-4681-a571-2703b02db912_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-477322 addons disable                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | addons-477322                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-477322 ssh curl -s                                                                   | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-477322 addons                                                                        | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-477322 addons                                                                        | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:36 UTC | 25 Apr 24 18:36 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-477322 ip                                                                            | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:38 UTC | 25 Apr 24 18:38 UTC |
	| addons  | addons-477322 addons disable                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:38 UTC | 25 Apr 24 18:38 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-477322 addons disable                                                                | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:38 UTC | 25 Apr 24 18:38 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-477322 addons                                                                        | addons-477322        | jenkins | v1.33.0 | 25 Apr 24 18:41 UTC | 25 Apr 24 18:41 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 18:32:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 18:32:08.876791   14407 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:32:08.876916   14407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:32:08.876925   14407 out.go:304] Setting ErrFile to fd 2...
	I0425 18:32:08.876930   14407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:32:08.877114   14407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:32:08.877755   14407 out.go:298] Setting JSON to false
	I0425 18:32:08.878614   14407 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":875,"bootTime":1714069054,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 18:32:08.878675   14407 start.go:139] virtualization: kvm guest
	I0425 18:32:08.880727   14407 out.go:177] * [addons-477322] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 18:32:08.882584   14407 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 18:32:08.883998   14407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 18:32:08.882607   14407 notify.go:220] Checking for updates...
	I0425 18:32:08.886576   14407 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:32:08.888028   14407 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:32:08.889490   14407 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 18:32:08.890830   14407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 18:32:08.892174   14407 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 18:32:08.922804   14407 out.go:177] * Using the kvm2 driver based on user configuration
	I0425 18:32:08.924096   14407 start.go:297] selected driver: kvm2
	I0425 18:32:08.924122   14407 start.go:901] validating driver "kvm2" against <nil>
	I0425 18:32:08.924135   14407 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 18:32:08.924812   14407 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:32:08.924891   14407 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 18:32:08.938794   14407 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 18:32:08.938846   14407 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 18:32:08.939031   14407 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 18:32:08.939079   14407 cni.go:84] Creating CNI manager for ""
	I0425 18:32:08.939091   14407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 18:32:08.939099   14407 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 18:32:08.939141   14407 start.go:340] cluster config:
	{Name:addons-477322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-477322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:32:08.939229   14407 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:32:08.940844   14407 out.go:177] * Starting "addons-477322" primary control-plane node in "addons-477322" cluster
	I0425 18:32:08.942146   14407 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:32:08.942182   14407 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 18:32:08.942192   14407 cache.go:56] Caching tarball of preloaded images
	I0425 18:32:08.942275   14407 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 18:32:08.942287   14407 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 18:32:08.942574   14407 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/config.json ...
	I0425 18:32:08.942593   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/config.json: {Name:mkfbbe8b32ad34fd727afe9be4baba9b3add5b51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:08.942715   14407 start.go:360] acquireMachinesLock for addons-477322: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 18:32:08.942757   14407 start.go:364] duration metric: took 29.658µs to acquireMachinesLock for "addons-477322"
	I0425 18:32:08.942773   14407 start.go:93] Provisioning new machine with config: &{Name:addons-477322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-477322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:32:08.942828   14407 start.go:125] createHost starting for "" (driver="kvm2")
	I0425 18:32:08.944458   14407 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0425 18:32:08.944599   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:32:08.944635   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:32:08.958849   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I0425 18:32:08.959520   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:32:08.960319   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:32:08.960341   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:32:08.960865   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:32:08.961094   14407 main.go:141] libmachine: (addons-477322) Calling .GetMachineName
	I0425 18:32:08.961247   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:08.961390   14407 start.go:159] libmachine.API.Create for "addons-477322" (driver="kvm2")
	I0425 18:32:08.961425   14407 client.go:168] LocalClient.Create starting
	I0425 18:32:08.961471   14407 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 18:32:09.117809   14407 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 18:32:09.209933   14407 main.go:141] libmachine: Running pre-create checks...
	I0425 18:32:09.209955   14407 main.go:141] libmachine: (addons-477322) Calling .PreCreateCheck
	I0425 18:32:09.210415   14407 main.go:141] libmachine: (addons-477322) Calling .GetConfigRaw
	I0425 18:32:09.210866   14407 main.go:141] libmachine: Creating machine...
	I0425 18:32:09.210882   14407 main.go:141] libmachine: (addons-477322) Calling .Create
	I0425 18:32:09.210996   14407 main.go:141] libmachine: (addons-477322) Creating KVM machine...
	I0425 18:32:09.212276   14407 main.go:141] libmachine: (addons-477322) DBG | found existing default KVM network
	I0425 18:32:09.212956   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:09.212840   14429 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0425 18:32:09.212986   14407 main.go:141] libmachine: (addons-477322) DBG | created network xml: 
	I0425 18:32:09.213012   14407 main.go:141] libmachine: (addons-477322) DBG | <network>
	I0425 18:32:09.213026   14407 main.go:141] libmachine: (addons-477322) DBG |   <name>mk-addons-477322</name>
	I0425 18:32:09.213037   14407 main.go:141] libmachine: (addons-477322) DBG |   <dns enable='no'/>
	I0425 18:32:09.213046   14407 main.go:141] libmachine: (addons-477322) DBG |   
	I0425 18:32:09.213061   14407 main.go:141] libmachine: (addons-477322) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0425 18:32:09.213067   14407 main.go:141] libmachine: (addons-477322) DBG |     <dhcp>
	I0425 18:32:09.213072   14407 main.go:141] libmachine: (addons-477322) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0425 18:32:09.213077   14407 main.go:141] libmachine: (addons-477322) DBG |     </dhcp>
	I0425 18:32:09.213084   14407 main.go:141] libmachine: (addons-477322) DBG |   </ip>
	I0425 18:32:09.213089   14407 main.go:141] libmachine: (addons-477322) DBG |   
	I0425 18:32:09.213095   14407 main.go:141] libmachine: (addons-477322) DBG | </network>
	I0425 18:32:09.213104   14407 main.go:141] libmachine: (addons-477322) DBG | 
	I0425 18:32:09.218454   14407 main.go:141] libmachine: (addons-477322) DBG | trying to create private KVM network mk-addons-477322 192.168.39.0/24...
	I0425 18:32:09.280699   14407 main.go:141] libmachine: (addons-477322) DBG | private KVM network mk-addons-477322 192.168.39.0/24 created
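
The <network> XML dumped above is what the kvm2 driver hands to libvirt to get an isolated bridge with its own 192.168.39.0/24 DHCP range for the VM. A minimal sketch of doing the same step by hand, assuming virsh (libvirt-clients) is on PATH and the XML has been saved to mk-addons-477322.xml; the filename and the use of virsh are illustrative, not what the driver actually invokes:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Register the network definition with libvirtd, then bring it up,
	// which creates the virbr bridge and enables the DHCP range above.
	steps := [][]string{
		{"net-define", "mk-addons-477322.xml"},
		{"net-start", "mk-addons-477322"},
	}
	for _, args := range steps {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
		}
		fmt.Printf("virsh %v: %s", args, out)
	}
}

After that, `virsh net-list --all` should list mk-addons-477322 as active, matching the "private KVM network ... created" line above.
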
	I0425 18:32:09.280761   14407 main.go:141] libmachine: (addons-477322) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322 ...
	I0425 18:32:09.280786   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:09.280643   14429 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:32:09.280815   14407 main.go:141] libmachine: (addons-477322) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 18:32:09.280840   14407 main.go:141] libmachine: (addons-477322) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 18:32:09.527664   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:09.527510   14429 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa...
	I0425 18:32:09.671854   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:09.671728   14429 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/addons-477322.rawdisk...
	I0425 18:32:09.671880   14407 main.go:141] libmachine: (addons-477322) DBG | Writing magic tar header
	I0425 18:32:09.671890   14407 main.go:141] libmachine: (addons-477322) DBG | Writing SSH key tar header
	I0425 18:32:09.671900   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:09.671863   14429 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322 ...
	I0425 18:32:09.671977   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322
	I0425 18:32:09.672001   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322 (perms=drwx------)
	I0425 18:32:09.672011   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 18:32:09.672024   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:32:09.672034   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 18:32:09.672048   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 18:32:09.672056   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home/jenkins
	I0425 18:32:09.672068   14407 main.go:141] libmachine: (addons-477322) DBG | Checking permissions on dir: /home
	I0425 18:32:09.672078   14407 main.go:141] libmachine: (addons-477322) DBG | Skipping /home - not owner
	I0425 18:32:09.672091   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 18:32:09.672108   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 18:32:09.672122   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 18:32:09.672138   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 18:32:09.672151   14407 main.go:141] libmachine: (addons-477322) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 18:32:09.672165   14407 main.go:141] libmachine: (addons-477322) Creating domain...
	I0425 18:32:09.673538   14407 main.go:141] libmachine: (addons-477322) define libvirt domain using xml: 
	I0425 18:32:09.673582   14407 main.go:141] libmachine: (addons-477322) <domain type='kvm'>
	I0425 18:32:09.673598   14407 main.go:141] libmachine: (addons-477322)   <name>addons-477322</name>
	I0425 18:32:09.673614   14407 main.go:141] libmachine: (addons-477322)   <memory unit='MiB'>4000</memory>
	I0425 18:32:09.673625   14407 main.go:141] libmachine: (addons-477322)   <vcpu>2</vcpu>
	I0425 18:32:09.673639   14407 main.go:141] libmachine: (addons-477322)   <features>
	I0425 18:32:09.673653   14407 main.go:141] libmachine: (addons-477322)     <acpi/>
	I0425 18:32:09.673666   14407 main.go:141] libmachine: (addons-477322)     <apic/>
	I0425 18:32:09.673678   14407 main.go:141] libmachine: (addons-477322)     <pae/>
	I0425 18:32:09.673688   14407 main.go:141] libmachine: (addons-477322)     
	I0425 18:32:09.673697   14407 main.go:141] libmachine: (addons-477322)   </features>
	I0425 18:32:09.673708   14407 main.go:141] libmachine: (addons-477322)   <cpu mode='host-passthrough'>
	I0425 18:32:09.673716   14407 main.go:141] libmachine: (addons-477322)   
	I0425 18:32:09.673728   14407 main.go:141] libmachine: (addons-477322)   </cpu>
	I0425 18:32:09.673740   14407 main.go:141] libmachine: (addons-477322)   <os>
	I0425 18:32:09.673753   14407 main.go:141] libmachine: (addons-477322)     <type>hvm</type>
	I0425 18:32:09.673763   14407 main.go:141] libmachine: (addons-477322)     <boot dev='cdrom'/>
	I0425 18:32:09.673771   14407 main.go:141] libmachine: (addons-477322)     <boot dev='hd'/>
	I0425 18:32:09.673784   14407 main.go:141] libmachine: (addons-477322)     <bootmenu enable='no'/>
	I0425 18:32:09.673794   14407 main.go:141] libmachine: (addons-477322)   </os>
	I0425 18:32:09.673803   14407 main.go:141] libmachine: (addons-477322)   <devices>
	I0425 18:32:09.673814   14407 main.go:141] libmachine: (addons-477322)     <disk type='file' device='cdrom'>
	I0425 18:32:09.673832   14407 main.go:141] libmachine: (addons-477322)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/boot2docker.iso'/>
	I0425 18:32:09.673847   14407 main.go:141] libmachine: (addons-477322)       <target dev='hdc' bus='scsi'/>
	I0425 18:32:09.673859   14407 main.go:141] libmachine: (addons-477322)       <readonly/>
	I0425 18:32:09.673870   14407 main.go:141] libmachine: (addons-477322)     </disk>
	I0425 18:32:09.673880   14407 main.go:141] libmachine: (addons-477322)     <disk type='file' device='disk'>
	I0425 18:32:09.673894   14407 main.go:141] libmachine: (addons-477322)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 18:32:09.673911   14407 main.go:141] libmachine: (addons-477322)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/addons-477322.rawdisk'/>
	I0425 18:32:09.673926   14407 main.go:141] libmachine: (addons-477322)       <target dev='hda' bus='virtio'/>
	I0425 18:32:09.673938   14407 main.go:141] libmachine: (addons-477322)     </disk>
	I0425 18:32:09.673948   14407 main.go:141] libmachine: (addons-477322)     <interface type='network'>
	I0425 18:32:09.673960   14407 main.go:141] libmachine: (addons-477322)       <source network='mk-addons-477322'/>
	I0425 18:32:09.673970   14407 main.go:141] libmachine: (addons-477322)       <model type='virtio'/>
	I0425 18:32:09.673978   14407 main.go:141] libmachine: (addons-477322)     </interface>
	I0425 18:32:09.673989   14407 main.go:141] libmachine: (addons-477322)     <interface type='network'>
	I0425 18:32:09.674014   14407 main.go:141] libmachine: (addons-477322)       <source network='default'/>
	I0425 18:32:09.674049   14407 main.go:141] libmachine: (addons-477322)       <model type='virtio'/>
	I0425 18:32:09.674061   14407 main.go:141] libmachine: (addons-477322)     </interface>
	I0425 18:32:09.674079   14407 main.go:141] libmachine: (addons-477322)     <serial type='pty'>
	I0425 18:32:09.674092   14407 main.go:141] libmachine: (addons-477322)       <target port='0'/>
	I0425 18:32:09.674098   14407 main.go:141] libmachine: (addons-477322)     </serial>
	I0425 18:32:09.674109   14407 main.go:141] libmachine: (addons-477322)     <console type='pty'>
	I0425 18:32:09.674184   14407 main.go:141] libmachine: (addons-477322)       <target type='serial' port='0'/>
	I0425 18:32:09.674220   14407 main.go:141] libmachine: (addons-477322)     </console>
	I0425 18:32:09.674238   14407 main.go:141] libmachine: (addons-477322)     <rng model='virtio'>
	I0425 18:32:09.674255   14407 main.go:141] libmachine: (addons-477322)       <backend model='random'>/dev/random</backend>
	I0425 18:32:09.674265   14407 main.go:141] libmachine: (addons-477322)     </rng>
	I0425 18:32:09.674276   14407 main.go:141] libmachine: (addons-477322)     
	I0425 18:32:09.674286   14407 main.go:141] libmachine: (addons-477322)     
	I0425 18:32:09.674297   14407 main.go:141] libmachine: (addons-477322)   </devices>
	I0425 18:32:09.674307   14407 main.go:141] libmachine: (addons-477322) </domain>
	I0425 18:32:09.674333   14407 main.go:141] libmachine: (addons-477322) 
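
The domain XML above (boot2docker ISO as a SCSI cdrom, the raw disk as a virtio disk, one interface on mk-addons-477322 and one on the default network) is first defined and then started through libvirt. A rough sketch of those two calls using the libvirt Go bindings, assuming the XML from the log is saved to domain.xml; the import path and error handling are illustrative and this is not the driver's actual code:

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}
	// Same URI as KVMQemuURI in the cluster config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// "define libvirt domain using xml"
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// "Creating domain..." - this actually boots the VM.
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain addons-477322 defined and started")
}
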
	I0425 18:32:09.679799   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:77:04:7e in network default
	I0425 18:32:09.680248   14407 main.go:141] libmachine: (addons-477322) Ensuring networks are active...
	I0425 18:32:09.680274   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:09.680901   14407 main.go:141] libmachine: (addons-477322) Ensuring network default is active
	I0425 18:32:09.681216   14407 main.go:141] libmachine: (addons-477322) Ensuring network mk-addons-477322 is active
	I0425 18:32:09.681628   14407 main.go:141] libmachine: (addons-477322) Getting domain xml...
	I0425 18:32:09.682440   14407 main.go:141] libmachine: (addons-477322) Creating domain...
	I0425 18:32:10.898121   14407 main.go:141] libmachine: (addons-477322) Waiting to get IP...
	I0425 18:32:10.898847   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:10.899233   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:10.899302   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:10.899220   14429 retry.go:31] will retry after 239.217748ms: waiting for machine to come up
	I0425 18:32:11.141056   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:11.141494   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:11.141517   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:11.141450   14429 retry.go:31] will retry after 270.176347ms: waiting for machine to come up
	I0425 18:32:11.412761   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:11.413161   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:11.413186   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:11.413113   14429 retry.go:31] will retry after 415.08956ms: waiting for machine to come up
	I0425 18:32:11.829611   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:11.830033   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:11.830062   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:11.829983   14429 retry.go:31] will retry after 464.643201ms: waiting for machine to come up
	I0425 18:32:12.296753   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:12.297076   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:12.297114   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:12.297027   14429 retry.go:31] will retry after 651.866009ms: waiting for machine to come up
	I0425 18:32:12.950911   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:12.951360   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:12.951381   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:12.951318   14429 retry.go:31] will retry after 661.025369ms: waiting for machine to come up
	I0425 18:32:13.614414   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:13.614858   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:13.614882   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:13.614817   14429 retry.go:31] will retry after 888.586656ms: waiting for machine to come up
	I0425 18:32:14.504593   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:14.504996   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:14.505026   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:14.504943   14429 retry.go:31] will retry after 1.452665926s: waiting for machine to come up
	I0425 18:32:15.959193   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:15.959653   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:15.959683   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:15.959621   14429 retry.go:31] will retry after 1.255402186s: waiting for machine to come up
	I0425 18:32:17.216960   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:17.217371   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:17.217390   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:17.217356   14429 retry.go:31] will retry after 2.037520865s: waiting for machine to come up
	I0425 18:32:19.257013   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:19.257421   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:19.257449   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:19.257380   14429 retry.go:31] will retry after 2.037152484s: waiting for machine to come up
	I0425 18:32:21.297654   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:21.298244   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:21.298276   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:21.298160   14429 retry.go:31] will retry after 2.608621662s: waiting for machine to come up
	I0425 18:32:23.909824   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:23.910314   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:23.910342   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:23.910255   14429 retry.go:31] will retry after 3.706941744s: waiting for machine to come up
	I0425 18:32:27.621440   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:27.621850   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find current IP address of domain addons-477322 in network mk-addons-477322
	I0425 18:32:27.621879   14407 main.go:141] libmachine: (addons-477322) DBG | I0425 18:32:27.621818   14429 retry.go:31] will retry after 4.669046243s: waiting for machine to come up
	I0425 18:32:32.294454   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.294799   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has current primary IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.294816   14407 main.go:141] libmachine: (addons-477322) Found IP for machine: 192.168.39.239
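
The retry loop above is effectively polling libvirt's DHCP leases until the VM's MAC address appears. A small sketch of the same check done by hand with virsh; the command and the fixed sleep are illustrative (the driver queries the lease through the libvirt API and backs off from 239ms up to several seconds, as the retry lines show):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const mac = "52:54:00:d2:55:42" // MAC the log reports for network mk-addons-477322
	for i := 0; i < 60; i++ {
		out, err := exec.Command("virsh", "net-dhcp-leases", "mk-addons-477322").Output()
		if err != nil {
			log.Fatal(err)
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, mac) {
				fmt.Println("lease found:", line) // should contain 192.168.39.239/24 once the VM is up
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for a DHCP lease")
}
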
	I0425 18:32:32.294838   14407 main.go:141] libmachine: (addons-477322) Reserving static IP address...
	I0425 18:32:32.295131   14407 main.go:141] libmachine: (addons-477322) DBG | unable to find host DHCP lease matching {name: "addons-477322", mac: "52:54:00:d2:55:42", ip: "192.168.39.239"} in network mk-addons-477322
	I0425 18:32:32.368610   14407 main.go:141] libmachine: (addons-477322) DBG | Getting to WaitForSSH function...
	I0425 18:32:32.368642   14407 main.go:141] libmachine: (addons-477322) Reserved static IP address: 192.168.39.239
	I0425 18:32:32.368655   14407 main.go:141] libmachine: (addons-477322) Waiting for SSH to be available...
	I0425 18:32:32.371205   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.371639   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.371677   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.371751   14407 main.go:141] libmachine: (addons-477322) DBG | Using SSH client type: external
	I0425 18:32:32.371795   14407 main.go:141] libmachine: (addons-477322) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa (-rw-------)
	I0425 18:32:32.371842   14407 main.go:141] libmachine: (addons-477322) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 18:32:32.371858   14407 main.go:141] libmachine: (addons-477322) DBG | About to run SSH command:
	I0425 18:32:32.371870   14407 main.go:141] libmachine: (addons-477322) DBG | exit 0
	I0425 18:32:32.506510   14407 main.go:141] libmachine: (addons-477322) DBG | SSH cmd err, output: <nil>: 
	I0425 18:32:32.506738   14407 main.go:141] libmachine: (addons-477322) KVM machine creation complete!
	I0425 18:32:32.507077   14407 main.go:141] libmachine: (addons-477322) Calling .GetConfigRaw
	I0425 18:32:32.507667   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:32.507945   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:32.508188   14407 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 18:32:32.508209   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:32:32.509461   14407 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 18:32:32.509477   14407 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 18:32:32.509484   14407 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 18:32:32.509490   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:32.511597   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.511940   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.511975   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.512082   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:32.512257   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.512402   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.512532   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:32.512647   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:32.512847   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:32.512863   14407 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 18:32:32.621804   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 18:32:32.621830   14407 main.go:141] libmachine: Detecting the provisioner...
	I0425 18:32:32.621838   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:32.624593   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.624922   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.624945   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.625076   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:32.625259   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.625441   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.625554   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:32.625739   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:32.625941   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:32.625957   14407 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 18:32:32.735728   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 18:32:32.735778   14407 main.go:141] libmachine: found compatible host: buildroot
	I0425 18:32:32.735785   14407 main.go:141] libmachine: Provisioning with buildroot...
	I0425 18:32:32.735792   14407 main.go:141] libmachine: (addons-477322) Calling .GetMachineName
	I0425 18:32:32.736059   14407 buildroot.go:166] provisioning hostname "addons-477322"
	I0425 18:32:32.736084   14407 main.go:141] libmachine: (addons-477322) Calling .GetMachineName
	I0425 18:32:32.736247   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:32.738736   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.739088   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.739117   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.739217   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:32.739398   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.739566   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.739707   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:32.739871   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:32.740024   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:32.740042   14407 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-477322 && echo "addons-477322" | sudo tee /etc/hostname
	I0425 18:32:32.866791   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-477322
	
	I0425 18:32:32.866826   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:32.869256   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.869620   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.869648   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.869766   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:32.869943   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.870081   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:32.870261   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:32.870461   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:32.870619   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:32.870634   14407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-477322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-477322/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-477322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 18:32:32.988831   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
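
Both the hostname command and the /etc/hosts fixup above are executed through the "native" Go SSH client rather than the external ssh binary used for the first probe. A minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the machine key, user and address shown in the log; running only the hostname command here is illustrative:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the external probe above
	}
	client, err := ssh.Dial("tcp", "192.168.39.239:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(`sudo hostname addons-477322 && echo "addons-477322" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatalf("remote command failed: %v\n%s", err, out)
	}
	log.Printf("remote output: %s", out)
}
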
	I0425 18:32:32.988865   14407 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 18:32:32.988910   14407 buildroot.go:174] setting up certificates
	I0425 18:32:32.988928   14407 provision.go:84] configureAuth start
	I0425 18:32:32.988940   14407 main.go:141] libmachine: (addons-477322) Calling .GetMachineName
	I0425 18:32:32.989194   14407 main.go:141] libmachine: (addons-477322) Calling .GetIP
	I0425 18:32:32.991753   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.992075   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.992111   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.992323   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:32.994416   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.994676   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:32.994702   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:32.994844   14407 provision.go:143] copyHostCerts
	I0425 18:32:32.994901   14407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 18:32:32.995021   14407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 18:32:32.995090   14407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 18:32:32.995152   14407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.addons-477322 san=[127.0.0.1 192.168.39.239 addons-477322 localhost minikube]
	I0425 18:32:33.115468   14407 provision.go:177] copyRemoteCerts
	I0425 18:32:33.115524   14407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 18:32:33.115548   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.118254   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.118570   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.118599   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.118774   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.118943   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.119086   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.119208   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:32:33.205234   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 18:32:33.232868   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0425 18:32:33.261346   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
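
The three scp lines above push the CA plus the freshly generated server cert and key into /etc/docker on the guest. One way to reproduce a single transfer by hand is to stream the local file over ssh into sudo tee, since the docker user cannot write /etc/docker directly; the exact mechanism minikube's ssh_runner uses may differ, so this is only a sketch:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	src, err := os.Open("/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	cmd := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa",
		"-o", "StrictHostKeyChecking=no",
		"docker@192.168.39.239",
		"sudo mkdir -p /etc/docker && sudo tee /etc/docker/ca.pem >/dev/null")
	cmd.Stdin = src // the local cert becomes the remote file's contents
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	log.Println("ca.pem copied to /etc/docker/ca.pem")
}
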
	I0425 18:32:33.289923   14407 provision.go:87] duration metric: took 300.978659ms to configureAuth
	I0425 18:32:33.289951   14407 buildroot.go:189] setting minikube options for container-runtime
	I0425 18:32:33.290149   14407 config.go:182] Loaded profile config "addons-477322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:32:33.290270   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.292926   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.293244   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.293269   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.293541   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.293733   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.293896   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.294030   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.294183   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:33.294406   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:33.294443   14407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 18:32:33.580602   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
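
The "%!s(MISSING)" in the command above (and in the `date +%!s(MISSING).%!N(MISSING)` and `find ... -printf "%!p(MISSING), "` commands further down) is not part of what actually ran on the VM: it is how Go's fmt package renders a percent verb that reaches a Printf-style call without a matching argument, which happens because the command string itself contains a literal %s for the remote shell's printf and is echoed through the logger. The output on the lines just above shows the real effect, a /etc/sysconfig/crio.minikube drop-in containing the CRIO_MINIKUBE_OPTIONS line. A tiny illustration of both the intended command and the logging artifact (the helper code is hypothetical, not minikube's):

package main

import "fmt"

func main() {
	opts := "--insecure-registry 10.96.0.0/12 "
	// %%s keeps a literal %s for the *remote* printf; %s is filled with opts.
	cmd := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='%s'\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
	fmt.Println(cmd)
	// Passing cmd itself as a format string with no arguments reproduces the
	// log artifact: the remaining %s is printed as "%!s(MISSING)".
	fmt.Printf(cmd)
}
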
	
	I0425 18:32:33.580664   14407 main.go:141] libmachine: Checking connection to Docker...
	I0425 18:32:33.580678   14407 main.go:141] libmachine: (addons-477322) Calling .GetURL
	I0425 18:32:33.581931   14407 main.go:141] libmachine: (addons-477322) DBG | Using libvirt version 6000000
	I0425 18:32:33.583813   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.584146   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.584174   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.584298   14407 main.go:141] libmachine: Docker is up and running!
	I0425 18:32:33.584317   14407 main.go:141] libmachine: Reticulating splines...
	I0425 18:32:33.584323   14407 client.go:171] duration metric: took 24.622887723s to LocalClient.Create
	I0425 18:32:33.584342   14407 start.go:167] duration metric: took 24.622953174s to libmachine.API.Create "addons-477322"
	I0425 18:32:33.584359   14407 start.go:293] postStartSetup for "addons-477322" (driver="kvm2")
	I0425 18:32:33.584371   14407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 18:32:33.584386   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:33.584619   14407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 18:32:33.584639   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.586625   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.586988   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.587016   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.587161   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.587339   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.587505   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.587639   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:32:33.674561   14407 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 18:32:33.679904   14407 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 18:32:33.679929   14407 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 18:32:33.680000   14407 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 18:32:33.680023   14407 start.go:296] duration metric: took 95.655998ms for postStartSetup
	I0425 18:32:33.680054   14407 main.go:141] libmachine: (addons-477322) Calling .GetConfigRaw
	I0425 18:32:33.680562   14407 main.go:141] libmachine: (addons-477322) Calling .GetIP
	I0425 18:32:33.683312   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.683618   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.683653   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.683858   14407 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/config.json ...
	I0425 18:32:33.684047   14407 start.go:128] duration metric: took 24.741208165s to createHost
	I0425 18:32:33.684072   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.686236   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.686509   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.686545   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.686676   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.686846   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.686997   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.687131   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.687303   14407 main.go:141] libmachine: Using SSH client type: native
	I0425 18:32:33.687505   14407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0425 18:32:33.687521   14407 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 18:32:33.799852   14407 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714069953.768233876
	
	I0425 18:32:33.799880   14407 fix.go:216] guest clock: 1714069953.768233876
	I0425 18:32:33.799887   14407 fix.go:229] Guest: 2024-04-25 18:32:33.768233876 +0000 UTC Remote: 2024-04-25 18:32:33.684060353 +0000 UTC m=+24.852639538 (delta=84.173523ms)
	I0425 18:32:33.799908   14407 fix.go:200] guest clock delta is within tolerance: 84.173523ms
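
The clock check above is simple arithmetic: the guest reports 1714069953.768233876 (2024-04-25 18:32:33.768233876 UTC) while the host-side reference is 18:32:33.684060353 UTC, so the guest runs roughly 84ms ahead, which the log deems within tolerance. A quick check of the stated delta:

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1714069953, 768233876)                         // value returned by the remote date command
	remote := time.Date(2024, 4, 25, 18, 32, 33, 684060353, time.UTC) // host-side reference from the log
	fmt.Println(guest.Sub(remote))                                    // prints 84.173523ms, matching the reported delta
}
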
	I0425 18:32:33.799913   14407 start.go:83] releasing machines lock for "addons-477322", held for 24.857147086s
	I0425 18:32:33.799932   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:33.800179   14407 main.go:141] libmachine: (addons-477322) Calling .GetIP
	I0425 18:32:33.802972   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.803469   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.803503   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.803645   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:33.804228   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:33.804401   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:32:33.804506   14407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 18:32:33.804550   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.804714   14407 ssh_runner.go:195] Run: cat /version.json
	I0425 18:32:33.804741   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:32:33.807620   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.807651   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.807972   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.807994   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.808033   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:33.808057   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:33.808170   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.808325   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:32:33.808404   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.808476   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:32:33.808537   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.808597   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:32:33.808705   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:32:33.808766   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:32:33.917578   14407 ssh_runner.go:195] Run: systemctl --version
	I0425 18:32:33.924711   14407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 18:32:34.093470   14407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 18:32:34.100098   14407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 18:32:34.100158   14407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 18:32:34.120554   14407 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 18:32:34.120613   14407 start.go:494] detecting cgroup driver to use...
	I0425 18:32:34.120673   14407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 18:32:34.139252   14407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 18:32:34.156176   14407 docker.go:217] disabling cri-docker service (if available) ...
	I0425 18:32:34.156229   14407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 18:32:34.172074   14407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 18:32:34.188818   14407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 18:32:34.321077   14407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 18:32:34.465908   14407 docker.go:233] disabling docker service ...
	I0425 18:32:34.465979   14407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 18:32:34.482982   14407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 18:32:34.497440   14407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 18:32:34.631854   14407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 18:32:34.780095   14407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 18:32:34.796352   14407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 18:32:34.818121   14407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 18:32:34.818216   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.831309   14407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 18:32:34.831388   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.844734   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.857818   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.871032   14407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 18:32:34.884226   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.897118   14407 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:32:34.920153   14407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
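Taken together, the sed edits above are expected to leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf pointing at the registry.k8s.io/pause:3.9 pause image, using the cgroupfs cgroup manager with conmon_cgroup = "pod", and allowing unprivileged low ports. The check below is a sketch inferred from those commands, not output captured from this run; the expected values in the comments are assumptions based on the log:

	# Sketch: confirm the CRI-O settings the preceding sed edits should have written.
	# Expected values are inferred from the commands in this log, not read back from the VM.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Roughly expected matches:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",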
	I0425 18:32:34.935413   14407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 18:32:34.949466   14407 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 18:32:34.949523   14407 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 18:32:34.968446   14407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 18:32:34.982842   14407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:32:35.129070   14407 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 18:32:35.289579   14407 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 18:32:35.289707   14407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 18:32:35.295185   14407 start.go:562] Will wait 60s for crictl version
	I0425 18:32:35.295261   14407 ssh_runner.go:195] Run: which crictl
	I0425 18:32:35.299565   14407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 18:32:35.341431   14407 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 18:32:35.341570   14407 ssh_runner.go:195] Run: crio --version
	I0425 18:32:35.376321   14407 ssh_runner.go:195] Run: crio --version
	I0425 18:32:35.409404   14407 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 18:32:35.410955   14407 main.go:141] libmachine: (addons-477322) Calling .GetIP
	I0425 18:32:35.413805   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:35.414177   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:32:35.414237   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:32:35.414445   14407 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 18:32:35.419492   14407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:32:35.435405   14407 kubeadm.go:877] updating cluster {Name:addons-477322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:addons-477322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 18:32:35.435507   14407 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:32:35.435548   14407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 18:32:35.472112   14407 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 18:32:35.472171   14407 ssh_runner.go:195] Run: which lz4
	I0425 18:32:35.476932   14407 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 18:32:35.481833   14407 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 18:32:35.481871   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 18:32:37.093365   14407 crio.go:462] duration metric: took 1.616455772s to copy over tarball
	I0425 18:32:37.093432   14407 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 18:32:39.682702   14407 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.589245174s)
	I0425 18:32:39.682732   14407 crio.go:469] duration metric: took 2.589338983s to extract the tarball
	I0425 18:32:39.682741   14407 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 18:32:39.722944   14407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 18:32:39.774424   14407 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 18:32:39.774454   14407 cache_images.go:84] Images are preloaded, skipping loading
	I0425 18:32:39.774464   14407 kubeadm.go:928] updating node { 192.168.39.239 8443 v1.30.0 crio true true} ...
	I0425 18:32:39.774604   14407 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-477322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-477322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 18:32:39.774697   14407 ssh_runner.go:195] Run: crio config
	I0425 18:32:39.827319   14407 cni.go:84] Creating CNI manager for ""
	I0425 18:32:39.827351   14407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 18:32:39.827365   14407 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 18:32:39.827386   14407 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-477322 NodeName:addons-477322 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 18:32:39.827564   14407 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-477322"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
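The multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what later gets copied to /var/tmp/minikube/kubeadm.yaml and fed to kubeadm init. For anyone reproducing this outside the test harness, it can be sanity-checked on the node first; kubeadm v1.26+ provides a validate subcommand. A minimal sketch, reusing the binary path and config location that appear later in this log:

	# Sketch: validate the generated config before running kubeadm init.
	# Paths are the ones used later in this log; `kubeadm config validate` exists in kubeadm v1.26+.
	sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml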
	
	I0425 18:32:39.827622   14407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 18:32:39.839343   14407 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 18:32:39.839406   14407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 18:32:39.850676   14407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0425 18:32:39.869798   14407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 18:32:39.889261   14407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0425 18:32:39.908921   14407 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I0425 18:32:39.913508   14407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:32:39.928631   14407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:32:40.062192   14407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:32:40.081068   14407 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322 for IP: 192.168.39.239
	I0425 18:32:40.081097   14407 certs.go:194] generating shared ca certs ...
	I0425 18:32:40.081119   14407 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.081284   14407 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 18:32:40.209056   14407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt ...
	I0425 18:32:40.209093   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt: {Name:mk3887859f354ed896fbae7c34bd1bc1db634b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.209270   14407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key ...
	I0425 18:32:40.209281   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key: {Name:mk71370329172ea9afcee9545022ae144932d1fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.209348   14407 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 18:32:40.308956   14407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt ...
	I0425 18:32:40.308984   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt: {Name:mkee8afd19c42bdc2e5f359d8aa6358fc627dcf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.309127   14407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key ...
	I0425 18:32:40.309138   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key: {Name:mke29a388a15f2bd08a1ab201764d3be8a3cef3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.309206   14407 certs.go:256] generating profile certs ...
	I0425 18:32:40.309263   14407 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.key
	I0425 18:32:40.309281   14407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt with IP's: []
	I0425 18:32:40.526590   14407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt ...
	I0425 18:32:40.526618   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: {Name:mkc0d2285ce92926517408da9b07c1b07342b6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.526769   14407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.key ...
	I0425 18:32:40.526779   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.key: {Name:mkf4f9d0102869b03358296e519a19d8577237bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.526843   14407 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key.561cdee7
	I0425 18:32:40.526859   14407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt.561cdee7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.239]
	I0425 18:32:40.675176   14407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt.561cdee7 ...
	I0425 18:32:40.675215   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt.561cdee7: {Name:mk7c67658c25dbae2b93ea93af92c48b425280c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.675377   14407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key.561cdee7 ...
	I0425 18:32:40.675390   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key.561cdee7: {Name:mkaa0d20d78bbf1d529d2d6afabe7b4b38456c4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.675458   14407 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt.561cdee7 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt
	I0425 18:32:40.675555   14407 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key.561cdee7 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key
	I0425 18:32:40.675603   14407 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.key
	I0425 18:32:40.675621   14407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.crt with IP's: []
	I0425 18:32:40.747246   14407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.crt ...
	I0425 18:32:40.747273   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.crt: {Name:mk022faf332c3fc64969534ed737054decdc5298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.747420   14407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.key ...
	I0425 18:32:40.747430   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.key: {Name:mk7bfbfe15f0850685a4c3880da12e0453dd03f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:40.747597   14407 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 18:32:40.747631   14407 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 18:32:40.747659   14407 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 18:32:40.747688   14407 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 18:32:40.748269   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 18:32:40.806470   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 18:32:40.843230   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 18:32:40.874273   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 18:32:40.901449   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0425 18:32:40.928360   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 18:32:40.956482   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 18:32:40.983212   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 18:32:41.009753   14407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 18:32:41.036763   14407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 18:32:41.055755   14407 ssh_runner.go:195] Run: openssl version
	I0425 18:32:41.062007   14407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 18:32:41.075276   14407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:32:41.080158   14407 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:32:41.080212   14407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:32:41.086157   14407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
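The three steps above wire the minikube CA into the guest's OpenSSL trust store: the certificate is linked under /etc/ssl/certs, its subject hash is computed with openssl x509 -hash, and a <hash>.0 symlink is created so OpenSSL can find the CA by hash lookup. A condensed sketch of the same sequence, with the paths and the b5213941 hash taken from this log:

	# Sketch of the CA trust wiring performed above; paths and hash come from this log.
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"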
	I0425 18:32:41.098669   14407 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 18:32:41.103292   14407 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 18:32:41.103338   14407 kubeadm.go:391] StartCluster: {Name:addons-477322 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 C
lusterName:addons-477322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:32:41.103404   14407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 18:32:41.103441   14407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 18:32:41.142243   14407 cri.go:89] found id: ""
	I0425 18:32:41.142323   14407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0425 18:32:41.153898   14407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 18:32:41.164693   14407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 18:32:41.175946   14407 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 18:32:41.175966   14407 kubeadm.go:156] found existing configuration files:
	
	I0425 18:32:41.176005   14407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 18:32:41.186659   14407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 18:32:41.186712   14407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 18:32:41.197809   14407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 18:32:41.208081   14407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 18:32:41.208155   14407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 18:32:41.219143   14407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 18:32:41.229854   14407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 18:32:41.229910   14407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 18:32:41.241178   14407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 18:32:41.252115   14407 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 18:32:41.252180   14407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 18:32:41.264070   14407 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 18:32:41.321297   14407 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 18:32:41.321372   14407 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 18:32:41.443720   14407 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 18:32:41.443857   14407 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 18:32:41.443962   14407 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 18:32:41.661182   14407 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 18:32:41.823901   14407 out.go:204]   - Generating certificates and keys ...
	I0425 18:32:41.824034   14407 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 18:32:41.824116   14407 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 18:32:42.144930   14407 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0425 18:32:42.316672   14407 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0425 18:32:42.467008   14407 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0425 18:32:42.724106   14407 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0425 18:32:42.913238   14407 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0425 18:32:42.913370   14407 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-477322 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0425 18:32:43.157029   14407 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0425 18:32:43.157201   14407 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-477322 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0425 18:32:43.351070   14407 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0425 18:32:43.565848   14407 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0425 18:32:43.869010   14407 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0425 18:32:43.869215   14407 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 18:32:44.088470   14407 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 18:32:44.468597   14407 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 18:32:44.644737   14407 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 18:32:45.018229   14407 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 18:32:45.141755   14407 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 18:32:45.142511   14407 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 18:32:45.144765   14407 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 18:32:45.147714   14407 out.go:204]   - Booting up control plane ...
	I0425 18:32:45.147854   14407 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 18:32:45.148842   14407 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 18:32:45.149620   14407 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 18:32:45.165670   14407 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 18:32:45.166136   14407 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 18:32:45.166198   14407 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 18:32:45.293502   14407 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 18:32:45.293624   14407 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 18:32:45.794844   14407 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.733075ms
	I0425 18:32:45.794919   14407 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 18:32:50.793605   14407 kubeadm.go:309] [api-check] The API server is healthy after 5.001976088s
	I0425 18:32:50.807832   14407 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 18:32:50.827916   14407 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 18:32:50.870695   14407 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 18:32:50.870886   14407 kubeadm.go:309] [mark-control-plane] Marking the node addons-477322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 18:32:50.886593   14407 kubeadm.go:309] [bootstrap-token] Using token: ys83sc.bekjayuufeldo30f
	I0425 18:32:50.888127   14407 out.go:204]   - Configuring RBAC rules ...
	I0425 18:32:50.888273   14407 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 18:32:50.897550   14407 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 18:32:50.909013   14407 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 18:32:50.915452   14407 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 18:32:50.919016   14407 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 18:32:50.922502   14407 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 18:32:51.199899   14407 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 18:32:51.637121   14407 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 18:32:52.199156   14407 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 18:32:52.200101   14407 kubeadm.go:309] 
	I0425 18:32:52.200181   14407 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 18:32:52.200192   14407 kubeadm.go:309] 
	I0425 18:32:52.200269   14407 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 18:32:52.200278   14407 kubeadm.go:309] 
	I0425 18:32:52.200321   14407 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 18:32:52.200405   14407 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 18:32:52.200478   14407 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 18:32:52.200488   14407 kubeadm.go:309] 
	I0425 18:32:52.200561   14407 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 18:32:52.200570   14407 kubeadm.go:309] 
	I0425 18:32:52.200637   14407 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 18:32:52.200647   14407 kubeadm.go:309] 
	I0425 18:32:52.200728   14407 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 18:32:52.200828   14407 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 18:32:52.200914   14407 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 18:32:52.200930   14407 kubeadm.go:309] 
	I0425 18:32:52.201003   14407 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 18:32:52.201068   14407 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 18:32:52.201075   14407 kubeadm.go:309] 
	I0425 18:32:52.201151   14407 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ys83sc.bekjayuufeldo30f \
	I0425 18:32:52.201253   14407 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
	I0425 18:32:52.201280   14407 kubeadm.go:309] 	--control-plane 
	I0425 18:32:52.201287   14407 kubeadm.go:309] 
	I0425 18:32:52.201380   14407 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 18:32:52.201412   14407 kubeadm.go:309] 
	I0425 18:32:52.201506   14407 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ys83sc.bekjayuufeldo30f \
	I0425 18:32:52.201657   14407 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed 
	I0425 18:32:52.202324   14407 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 18:32:52.202392   14407 cni.go:84] Creating CNI manager for ""
	I0425 18:32:52.202410   14407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 18:32:52.204979   14407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 18:32:52.206337   14407 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 18:32:52.218404   14407 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 18:32:52.243570   14407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 18:32:52.243681   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:52.243709   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-477322 minikube.k8s.io/updated_at=2024_04_25T18_32_52_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=addons-477322 minikube.k8s.io/primary=true
	I0425 18:32:52.300375   14407 ops.go:34] apiserver oom_adj: -16
	I0425 18:32:52.416994   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:52.917852   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:53.417204   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:53.917461   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:54.418006   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:54.917274   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:55.417275   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:55.918058   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:56.417114   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:56.917477   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:57.417066   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:57.917439   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:58.417199   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:58.918026   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:59.417113   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:32:59.917781   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:00.417749   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:00.917011   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:01.417152   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:01.917596   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:02.417553   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:02.917479   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:03.417472   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:03.918018   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:04.417534   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:04.917311   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:05.418092   14407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:33:05.942899   14407 kubeadm.go:1107] duration metric: took 13.699285685s to wait for elevateKubeSystemPrivileges
	W0425 18:33:05.942944   14407 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 18:33:05.942955   14407 kubeadm.go:393] duration metric: took 24.839620054s to StartCluster
	I0425 18:33:05.942977   14407 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:33:05.943172   14407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:33:05.943654   14407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:33:05.943960   14407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0425 18:33:05.944012   14407 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:33:05.945947   14407 out.go:177] * Verifying Kubernetes components...
	I0425 18:33:05.944130   14407 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0425 18:33:05.944225   14407 config.go:182] Loaded profile config "addons-477322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:33:05.947740   14407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:33:05.947752   14407 addons.go:69] Setting yakd=true in profile "addons-477322"
	I0425 18:33:05.947785   14407 addons.go:234] Setting addon yakd=true in "addons-477322"
	I0425 18:33:05.947807   14407 addons.go:69] Setting ingress-dns=true in profile "addons-477322"
	I0425 18:33:05.947822   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.947831   14407 addons.go:234] Setting addon ingress-dns=true in "addons-477322"
	I0425 18:33:05.947843   14407 addons.go:69] Setting registry=true in profile "addons-477322"
	I0425 18:33:05.947861   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.947867   14407 addons.go:69] Setting metrics-server=true in profile "addons-477322"
	I0425 18:33:05.947873   14407 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-477322"
	I0425 18:33:05.947883   14407 addons.go:69] Setting cloud-spanner=true in profile "addons-477322"
	I0425 18:33:05.947891   14407 addons.go:69] Setting default-storageclass=true in profile "addons-477322"
	I0425 18:33:05.947905   14407 addons.go:234] Setting addon cloud-spanner=true in "addons-477322"
	I0425 18:33:05.947908   14407 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-477322"
	I0425 18:33:05.947918   14407 addons.go:234] Setting addon metrics-server=true in "addons-477322"
	I0425 18:33:05.947922   14407 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-477322"
	I0425 18:33:05.947933   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.947938   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.947951   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.947955   14407 addons.go:69] Setting helm-tiller=true in profile "addons-477322"
	I0425 18:33:05.947974   14407 addons.go:234] Setting addon helm-tiller=true in "addons-477322"
	I0425 18:33:05.947990   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.948262   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948282   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948298   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948313   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948315   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948326   14407 addons.go:69] Setting inspektor-gadget=true in profile "addons-477322"
	I0425 18:33:05.948331   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948331   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948339   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948349   14407 addons.go:234] Setting addon inspektor-gadget=true in "addons-477322"
	I0425 18:33:05.948352   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948367   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948374   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.948371   14407 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-477322"
	I0425 18:33:05.948316   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948398   14407 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-477322"
	I0425 18:33:05.948417   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.947950   14407 addons.go:69] Setting gcp-auth=true in profile "addons-477322"
	I0425 18:33:05.948439   14407 mustload.go:65] Loading cluster: addons-477322
	I0425 18:33:05.948459   14407 addons.go:69] Setting ingress=true in profile "addons-477322"
	I0425 18:33:05.948482   14407 addons.go:234] Setting addon ingress=true in "addons-477322"
	I0425 18:33:05.948523   14407 addons.go:69] Setting volumesnapshots=true in profile "addons-477322"
	I0425 18:33:05.948547   14407 addons.go:234] Setting addon volumesnapshots=true in "addons-477322"
	I0425 18:33:05.948573   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.948594   14407 config.go:182] Loaded profile config "addons-477322": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:33:05.948668   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948689   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.947875   14407 addons.go:234] Setting addon registry=true in "addons-477322"
	I0425 18:33:05.948774   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948792   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.948850   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948898   14407 addons.go:69] Setting storage-provisioner=true in profile "addons-477322"
	I0425 18:33:05.948919   14407 addons.go:234] Setting addon storage-provisioner=true in "addons-477322"
	I0425 18:33:05.948930   14407 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-477322"
	I0425 18:33:05.948941   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948944   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.948968   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.948969   14407 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-477322"
	I0425 18:33:05.948983   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.949050   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.949117   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.949144   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.949257   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.949329   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.949715   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.949927   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.950297   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.950347   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.950419   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.950445   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.969960   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I0425 18:33:05.970191   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0425 18:33:05.970508   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.970600   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.971079   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.971080   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.971124   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.971108   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.971470   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.971506   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.972055   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.972078   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.972055   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.972127   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.972470   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42837
	I0425 18:33:05.982610   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.982663   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.982917   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42579
	I0425 18:33:05.983012   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I0425 18:33:05.983075   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43543
	I0425 18:33:05.983252   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.983993   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.984012   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.984078   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.984534   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.984602   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.984670   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.985089   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.985129   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.991231   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.991254   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.991409   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.991423   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.991535   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.991545   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.992476   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.992488   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.992538   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.992648   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45537
	I0425 18:33:05.993141   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:05.993205   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.993213   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:05.993241   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.993855   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:05.994435   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:05.994453   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:05.994834   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:05.995046   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:05.997336   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.997728   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.997749   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.999439   14407 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-477322"
	I0425 18:33:05.999480   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.999497   14407 addons.go:234] Setting addon default-storageclass=true in "addons-477322"
	I0425 18:33:05.999531   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:05.999819   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.999837   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:05.999915   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:05.999939   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.001554   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I0425 18:33:06.002084   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.002634   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.002657   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.003013   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.003541   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.003580   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.008778   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37749
	I0425 18:33:06.009470   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.009573   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37813
	I0425 18:33:06.010052   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.010076   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.010135   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.010572   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.010597   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.010911   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.011222   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.011459   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.011480   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.011733   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.011762   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.012518   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43295
	I0425 18:33:06.017103   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I0425 18:33:06.017390   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.018427   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.019002   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.019020   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.019381   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.019906   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.019944   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.021119   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.021137   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.021485   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.022052   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.022089   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.028780   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42485
	I0425 18:33:06.029372   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.030018   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.030037   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.032523   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I0425 18:33:06.032908   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.033494   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.033509   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.034116   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.034362   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.034967   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.035503   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.035540   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.036381   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.038648   14407 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0425 18:33:06.040889   14407 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0425 18:33:06.040907   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0425 18:33:06.040929   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.038730   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44209
	I0425 18:33:06.037297   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I0425 18:33:06.041253   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41697
	I0425 18:33:06.042234   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.042586   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.043032   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.043047   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.043512   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.043633   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.043938   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.043954   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.044407   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.044439   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.044658   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.045163   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.045187   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.045373   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.045525   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.045670   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.045805   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.046170   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.046183   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.046298   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.046835   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.047122   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.048936   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.049177   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.050745   14407 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0425 18:33:06.052339   14407 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 18:33:06.052357   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 18:33:06.052378   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.055538   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.056154   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.056184   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.056386   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.056546   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.056678   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.056791   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.057063   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45241
	I0425 18:33:06.057677   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.057764   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34169
	I0425 18:33:06.061243   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.061271   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.061335   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I0425 18:33:06.062069   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.062161   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0425 18:33:06.063316   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.063334   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.063507   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.063648   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.063700   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.063732   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.064272   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.064347   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.064362   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.064373   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0425 18:33:06.064350   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45793
	I0425 18:33:06.064831   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.064883   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.065103   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.065248   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.065260   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.065617   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.066008   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.066075   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.066147   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.066182   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.067711   14407 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0425 18:33:06.066734   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.067025   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.067314   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.067856   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.070638   14407 out.go:177]   - Using image docker.io/busybox:stable
	I0425 18:33:06.069101   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.070125   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.071922   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
	I0425 18:33:06.071988   14407 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0425 18:33:06.072416   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.073224   14407 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0425 18:33:06.074799   14407 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0425 18:33:06.074817   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0425 18:33:06.074833   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.072498   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37565
	I0425 18:33:06.072566   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0425 18:33:06.073189   14407 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0425 18:33:06.076678   14407 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0425 18:33:06.076692   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0425 18:33:06.076707   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.075525   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0425 18:33:06.073295   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.073604   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.073817   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.076912   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.073278   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0425 18:33:06.077892   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.075628   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.076450   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.078417   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40705
	I0425 18:33:06.078595   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.078609   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.078610   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.078624   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.078686   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.078921   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.078997   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.079045   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.079080   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.079325   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.079338   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.079383   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.079774   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.079826   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.079848   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.079861   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.079940   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.079951   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.080184   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.080421   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.080965   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:06.080998   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:06.081176   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.081317   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.081387   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.081427   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.081539   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.082980   14407 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0425 18:33:06.084314   14407 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0425 18:33:06.084331   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0425 18:33:06.084348   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.083066   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.082355   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.081978   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.084422   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.083385   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.083890   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.084011   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.085144   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.086503   14407 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 18:33:06.087812   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.087884   14407 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 18:33:06.087896   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 18:33:06.087908   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.086513   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.089297   14407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0425 18:33:06.085379   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.087130   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.087944   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.088199   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.088287   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.088528   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.090561   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43615
	I0425 18:33:06.090684   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0425 18:33:06.090708   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.090783   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.090855   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.091308   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.091761   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0425 18:33:06.091786   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.091797   14407 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0425 18:33:06.091853   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.091896   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.092057   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.092522   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.092576   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.093031   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.094279   14407 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0425 18:33:06.094292   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.094300   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0425 18:33:06.094312   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.093179   14407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0425 18:33:06.093241   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.093998   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.093998   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.094018   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.094019   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.094079   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.094728   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.096872   14407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0425 18:33:06.095640   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.095672   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0425 18:33:06.095681   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.095720   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.095869   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.095899   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.095918   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.095975   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.096671   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.097518   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.098108   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0425 18:33:06.098129   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.098169   14407 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0425 18:33:06.098180   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0425 18:33:06.098190   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.098216   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.098292   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.098313   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.098439   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.098551   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.098741   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.098888   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.099007   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.099308   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.099526   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.099922   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.101917   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.102269   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.103931   14407 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0425 18:33:06.102585   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.102758   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.102893   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.103077   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.103637   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.105170   14407 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0425 18:33:06.105180   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0425 18:33:06.105190   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.105216   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.105270   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.105284   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.105302   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.105339   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.106773   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0425 18:33:06.105793   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.105802   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.105912   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44929
	I0425 18:33:06.107812   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.109207   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0425 18:33:06.108097   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.108213   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.108210   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.108363   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.108459   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.110393   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.111695   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0425 18:33:06.110696   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.110790   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.112741   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.113797   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0425 18:33:06.112926   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.113043   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.114978   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0425 18:33:06.116249   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0425 18:33:06.115155   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.115199   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.118597   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	W0425 18:33:06.118267   14407 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:41796->192.168.39.239:22: read: connection reset by peer
	I0425 18:33:06.119908   14407 retry.go:31] will retry after 158.527244ms: ssh: handshake failed: read tcp 192.168.39.1:41796->192.168.39.239:22: read: connection reset by peer
	I0425 18:33:06.119656   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.119877   14407 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0425 18:33:06.121340   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0425 18:33:06.120162   14407 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 18:33:06.121361   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0425 18:33:06.121365   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 18:33:06.121384   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.121386   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.121265   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35229
	I0425 18:33:06.121825   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:06.122983   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:06.123005   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:06.123469   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:06.123739   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:06.125125   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.125401   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:06.125465   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.125715   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.125734   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.127103   14407 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0425 18:33:06.125893   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.126025   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.126029   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.128348   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.129594   14407 out.go:177]   - Using image docker.io/registry:2.8.3
	I0425 18:33:06.128548   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.128562   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.130867   14407 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0425 18:33:06.130884   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0425 18:33:06.130895   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:06.130964   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.131054   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.131102   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.131220   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:06.133835   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.134192   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:06.134239   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:06.134377   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:06.134539   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:06.134686   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:06.134802   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
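The lines above show the addon installer opening a separate SSH session to the node at 192.168.39.239 for each addon goroutine, using the machine's generated key; every later "scp memory --> <path> (N bytes)" entry is an embedded manifest being streamed from the installer's memory straight onto the node, not copied from a file on the Jenkins host. A hedged shell approximation of a single such transfer follows (the real code path is minikube's ssh_runner, not a literal ssh invocation; the key path, user, IP, and destination are copied from the log, and MANIFEST_CONTENTS is a hypothetical placeholder for the embedded YAML):

    # Sketch only: stream one in-memory manifest to the node over SSH, which is
    # roughly what the "scp memory --> /etc/kubernetes/addons/ig-namespace.yaml"
    # line above amounts to.
    printf '%s' "$MANIFEST_CONTENTS" | ssh \
        -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa \
        docker@192.168.39.239 \
        "sudo tee /etc/kubernetes/addons/ig-namespace.yaml >/dev/null"

The later "kubectl apply -f /etc/kubernetes/addons/..." commands in the log then consume these files on the node itself.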
	I0425 18:33:06.556088   14407 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 18:33:06.556115   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0425 18:33:06.606644   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0425 18:33:06.651816   14407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:33:06.652191   14407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
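The /bin/bash one-liner above rewrites the kube-system coredns ConfigMap so that pods can resolve host.minikube.internal to the libvirt gateway (192.168.39.1 here) and so CoreDNS logs queries. Unpacking the two sed expressions from the log, the resulting Corefile fragment should look roughly like the sketch below; the surrounding default plugins are abbreviated with "..." and are not taken from the live ConfigMap:

    .:53 {
        log                                   # inserted before the existing "errors" line
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }                                     # inserted just before the forward plugin
        forward . /etc/resolv.conf
        ...
    }

The trailing "kubectl replace -f -" then pushes the edited ConfigMap back; its completion is reported about five seconds later at the end of this excerpt.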
	I0425 18:33:06.678222   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0425 18:33:06.681615   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0425 18:33:06.690253   14407 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0425 18:33:06.690272   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0425 18:33:06.696050   14407 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 18:33:06.696067   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 18:33:06.747250   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0425 18:33:06.747272   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0425 18:33:06.749360   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 18:33:06.750651   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0425 18:33:06.762924   14407 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0425 18:33:06.762942   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0425 18:33:06.771911   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0425 18:33:06.797469   14407 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0425 18:33:06.797490   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0425 18:33:06.798726   14407 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0425 18:33:06.798742   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0425 18:33:06.808296   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 18:33:06.896731   14407 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0425 18:33:06.896760   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0425 18:33:06.923799   14407 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0425 18:33:06.923829   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0425 18:33:06.931916   14407 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 18:33:06.931933   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 18:33:06.989070   14407 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0425 18:33:06.989093   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0425 18:33:07.046858   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0425 18:33:07.046879   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0425 18:33:07.057601   14407 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0425 18:33:07.057619   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0425 18:33:07.140734   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 18:33:07.171248   14407 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0425 18:33:07.171265   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0425 18:33:07.188241   14407 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0425 18:33:07.188258   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0425 18:33:07.225678   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0425 18:33:07.232040   14407 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0425 18:33:07.232057   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0425 18:33:07.246141   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0425 18:33:07.246160   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0425 18:33:07.341420   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0425 18:33:07.406952   14407 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0425 18:33:07.406984   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0425 18:33:07.427563   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0425 18:33:07.427586   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0425 18:33:07.434125   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0425 18:33:07.434145   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0425 18:33:07.506614   14407 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0425 18:33:07.506642   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0425 18:33:07.668075   14407 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0425 18:33:07.668103   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0425 18:33:07.711218   14407 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0425 18:33:07.711237   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0425 18:33:07.763120   14407 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0425 18:33:07.763142   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0425 18:33:07.846931   14407 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0425 18:33:07.846952   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0425 18:33:08.064424   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0425 18:33:08.064445   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0425 18:33:08.075370   14407 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0425 18:33:08.075387   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0425 18:33:08.300653   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0425 18:33:08.427844   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0425 18:33:08.449781   14407 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0425 18:33:08.449811   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0425 18:33:08.470617   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0425 18:33:08.470639   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0425 18:33:08.724521   14407 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0425 18:33:08.724546   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0425 18:33:08.817882   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0425 18:33:08.817902   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0425 18:33:08.998500   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0425 18:33:09.322248   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0425 18:33:09.322267   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0425 18:33:09.750536   14407 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0425 18:33:09.750560   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0425 18:33:10.450037   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0425 18:33:11.994218   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.3875271s)
	I0425 18:33:11.994252   14407 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.342404193s)
	I0425 18:33:11.994280   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:11.994291   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:11.994286   14407 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.342069888s)
	I0425 18:33:11.994311   14407 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0425 18:33:11.994358   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.316107074s)
	I0425 18:33:11.994398   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:11.994411   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:11.994554   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:11.994604   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:11.994621   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:11.994624   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:11.994682   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:11.994687   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:11.994732   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:11.994744   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:11.994751   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:11.994761   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:11.994896   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:11.994913   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:11.995144   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:11.995155   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:11.995194   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:12.019068   14407 node_ready.go:35] waiting up to 6m0s for node "addons-477322" to be "Ready" ...
	I0425 18:33:12.130469   14407 node_ready.go:49] node "addons-477322" has status "Ready":"True"
	I0425 18:33:12.130501   14407 node_ready.go:38] duration metric: took 111.404224ms for node "addons-477322" to be "Ready" ...
	I0425 18:33:12.130514   14407 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 18:33:12.149903   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:12.149930   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:12.150265   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:12.150333   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:12.150350   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:12.245756   14407 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6wpfr" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.430039   14407 pod_ready.go:92] pod "coredns-7db6d8ff4d-6wpfr" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.430061   14407 pod_ready.go:81] duration metric: took 184.280371ms for pod "coredns-7db6d8ff4d-6wpfr" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.430071   14407 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w9mgq" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.535521   14407 pod_ready.go:92] pod "coredns-7db6d8ff4d-w9mgq" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.535553   14407 pod_ready.go:81] duration metric: took 105.475613ms for pod "coredns-7db6d8ff4d-w9mgq" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.535567   14407 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.556162   14407 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-477322" context rescaled to 1 replicas
	I0425 18:33:12.591846   14407 pod_ready.go:92] pod "etcd-addons-477322" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.591870   14407 pod_ready.go:81] duration metric: took 56.29632ms for pod "etcd-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.591879   14407 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.670472   14407 pod_ready.go:92] pod "kube-apiserver-addons-477322" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.670502   14407 pod_ready.go:81] duration metric: took 78.615552ms for pod "kube-apiserver-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.670515   14407 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.768097   14407 pod_ready.go:92] pod "kube-controller-manager-addons-477322" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.768120   14407 pod_ready.go:81] duration metric: took 97.597567ms for pod "kube-controller-manager-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.768131   14407 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rgvqp" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.864203   14407 pod_ready.go:92] pod "kube-proxy-rgvqp" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:12.864233   14407 pod_ready.go:81] duration metric: took 96.09537ms for pod "kube-proxy-rgvqp" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:12.864247   14407 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:13.087596   14407 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0425 18:33:13.087640   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:13.090723   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:13.091163   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:13.091191   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:13.091424   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:13.091641   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:13.091833   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:13.091970   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:13.244598   14407 pod_ready.go:92] pod "kube-scheduler-addons-477322" in "kube-system" namespace has status "Ready":"True"
	I0425 18:33:13.244622   14407 pod_ready.go:81] duration metric: took 380.367298ms for pod "kube-scheduler-addons-477322" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:13.244632   14407 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace to be "Ready" ...
	I0425 18:33:13.568658   14407 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0425 18:33:13.660837   14407 addons.go:234] Setting addon gcp-auth=true in "addons-477322"
	I0425 18:33:13.660896   14407 host.go:66] Checking if "addons-477322" exists ...
	I0425 18:33:13.661172   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:13.661197   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:13.676994   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I0425 18:33:13.677538   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:13.678062   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:13.678087   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:13.678465   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:13.678910   14407 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:33:13.678938   14407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:33:13.695280   14407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34497
	I0425 18:33:13.695819   14407 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:33:13.696324   14407 main.go:141] libmachine: Using API Version  1
	I0425 18:33:13.696351   14407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:33:13.696608   14407 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:33:13.696776   14407 main.go:141] libmachine: (addons-477322) Calling .GetState
	I0425 18:33:13.698233   14407 main.go:141] libmachine: (addons-477322) Calling .DriverName
	I0425 18:33:13.698432   14407 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0425 18:33:13.698450   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHHostname
	I0425 18:33:13.700904   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:13.701288   14407 main.go:141] libmachine: (addons-477322) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:55:42", ip: ""} in network mk-addons-477322: {Iface:virbr1 ExpiryTime:2024-04-25 19:32:25 +0000 UTC Type:0 Mac:52:54:00:d2:55:42 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-477322 Clientid:01:52:54:00:d2:55:42}
	I0425 18:33:13.701315   14407 main.go:141] libmachine: (addons-477322) DBG | domain addons-477322 has defined IP address 192.168.39.239 and MAC address 52:54:00:d2:55:42 in network mk-addons-477322
	I0425 18:33:13.701430   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHPort
	I0425 18:33:13.701582   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHKeyPath
	I0425 18:33:13.701751   14407 main.go:141] libmachine: (addons-477322) Calling .GetSSHUsername
	I0425 18:33:13.701908   14407 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/addons-477322/id_rsa Username:docker}
	I0425 18:33:15.260636   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:15.793560   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.044172301s)
	I0425 18:33:15.793622   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793634   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.793638   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.042962118s)
	I0425 18:33:15.793681   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793679   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.112020981s)
	I0425 18:33:15.793691   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.793710   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793748   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.021810986s)
	I0425 18:33:15.793817   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793763   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.793860   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.653100429s)
	I0425 18:33:15.793879   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.793884   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793895   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.793781   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.985463529s)
	I0425 18:33:15.793933   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.793946   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.793963   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794368   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.794003   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.794028   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.794407   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.794412   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794417   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794422   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794425   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794430   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794433   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794064   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.794422   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.795889724s)
	I0425 18:33:15.794472   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794482   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794088   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.794491   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794496   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794504   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794511   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794483   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794523   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794119   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.568411965s)
	I0425 18:33:15.794123   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.794546   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794553   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.794253   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.794579   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794587   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.794594   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.795266   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.795289   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.795308   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.795337   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.795345   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.795355   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.795364   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.795424   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.795432   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.795440   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.795451   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.795505   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.795528   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.795535   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.795754   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.795765   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.795776   14407 addons.go:470] Verifying addon metrics-server=true in "addons-477322"
	I0425 18:33:15.795832   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.795867   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.795876   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.794077   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.794330   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.493586195s)
	W0425 18:33:15.796340   14407 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0425 18:33:15.796360   14407 retry.go:31] will retry after 294.271271ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
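The apply failure above is a CRD-ordering race: the VolumeSnapshotClass object is applied in the same kubectl pass that creates the snapshot.storage.k8s.io CRDs, so the mapping for kind "VolumeSnapshotClass" does not exist yet and minikube schedules a retry (~294ms later, eventually re-applying with --force further down in the log). The following is only a minimal, hypothetical Go sketch of that retry-on-apply pattern; it is not minikube's actual retry.go, and the binary path, kubeconfig, attempt count and backoff are placeholder assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// flagify turns each manifest path into a "-f <path>" argument pair for kubectl.
func flagify(paths []string) []string {
	var out []string
	for _, p := range paths {
		out = append(out, "-f", p)
	}
	return out
}

// applyWithRetry shells out to kubectl and retries on failure with a fixed
// backoff, so a CR that depends on a just-created CRD can succeed once the
// CRD is established. Illustrative sketch only; all paths are assumptions.
func applyWithRetry(kubectl, kubeconfig string, manifests []string, attempts int, backoff time.Duration) error {
	args := append([]string{"--kubeconfig=" + kubeconfig, "apply"}, flagify(manifests)...)
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command(kubectl, args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		time.Sleep(backoff)
	}
	return lastErr
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
	}
	if err := applyWithRetry("kubectl", "/var/lib/minikube/kubeconfig", manifests, 3, 300*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}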
	I0425 18:33:15.796396   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.796422   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.796429   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.796479   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.796486   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.796642   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.796662   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.796668   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.796675   14407 addons.go:470] Verifying addon ingress=true in "addons-477322"
	I0425 18:33:15.794162   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.452704683s)
	I0425 18:33:15.794330   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.366446473s)
	I0425 18:33:15.797236   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.797255   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.797385   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.798573   14407 out.go:177] * Verifying ingress addon...
	I0425 18:33:15.798629   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.800495   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.798639   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.800539   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.800552   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.798642   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.800587   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.798653   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.800560   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.800822   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.800842   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.800855   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.800858   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.800864   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.800872   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.800888   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.800897   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.800980   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.800993   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.801002   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.801009   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.801328   14407 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0425 18:33:15.802100   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.802112   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.802105   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.802126   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.802107   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.802155   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:15.802163   14407 addons.go:470] Verifying addon registry=true in "addons-477322"
	I0425 18:33:15.804238   14407 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-477322 service yakd-dashboard -n yakd-dashboard
	
	I0425 18:33:15.805569   14407 out.go:177] * Verifying registry addon...
	I0425 18:33:15.807529   14407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0425 18:33:15.817239   14407 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0425 18:33:15.817254   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:15.838190   14407 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0425 18:33:15.838223   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
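The repeated kapi.go:96 lines that follow come from a readiness poll: minikube lists pods matching a label selector and keeps waiting while they report Pending. A rough, self-contained client-go sketch of that kind of poll is shown below; it is not minikube's kapi implementation, and the kubeconfig path, namespace, selector and poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector in ns until all of them are
// Running or the timeout expires, printing the current phase while waiting.
func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if ready {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for pods with selector %q", selector)
}

func main() {
	// Kubeconfig path is a placeholder assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodsRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}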
	I0425 18:33:15.859853   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:15.859877   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:15.860120   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:15.860165   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:15.860175   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:16.091338   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0425 18:33:16.306582   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:16.341078   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:16.828513   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:16.828858   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:17.309805   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:17.315934   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:17.779252   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:17.834251   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:17.843985   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.393894349s)
	I0425 18:33:17.844012   14407 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.145558273s)
	I0425 18:33:17.844046   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:17.844061   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:17.845855   14407 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0425 18:33:17.844390   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:17.844430   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:17.847356   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:17.847368   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:17.847374   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:17.848824   14407 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0425 18:33:17.847696   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:17.847725   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:17.850145   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:17.850166   14407 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-477322"
	I0425 18:33:17.850182   14407 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0425 18:33:17.850199   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0425 18:33:17.851547   14407 out.go:177] * Verifying csi-hostpath-driver addon...
	I0425 18:33:17.853958   14407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0425 18:33:17.870815   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:17.878910   14407 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0425 18:33:17.878945   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:18.065904   14407 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0425 18:33:18.065923   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0425 18:33:18.194029   14407 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0425 18:33:18.194049   14407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0425 18:33:18.297738   14407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0425 18:33:18.307261   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:18.315947   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:18.361108   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:18.805482   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:18.811931   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:18.860849   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:19.058797   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.967403897s)
	I0425 18:33:19.058850   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:19.058871   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:19.059145   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:19.059230   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:19.059247   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:19.059255   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:19.059199   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:19.059542   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:19.059562   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:19.059571   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:19.306296   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:19.311815   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:19.359810   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:19.786115   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:19.854701   14407 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.556932672s)
	I0425 18:33:19.854746   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:19.854760   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:19.855028   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:19.855050   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:19.855053   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:19.855066   14407 main.go:141] libmachine: Making call to close driver server
	I0425 18:33:19.855083   14407 main.go:141] libmachine: (addons-477322) Calling .Close
	I0425 18:33:19.855398   14407 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:33:19.855416   14407 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:33:19.855462   14407 main.go:141] libmachine: (addons-477322) DBG | Closing plugin on server side
	I0425 18:33:19.857132   14407 addons.go:470] Verifying addon gcp-auth=true in "addons-477322"
	I0425 18:33:19.858799   14407 out.go:177] * Verifying gcp-auth addon...
	I0425 18:33:19.861172   14407 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0425 18:33:19.864370   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:19.865040   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:19.913020   14407 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0425 18:33:19.913044   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:19.913651   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:20.306884   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:20.314076   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:20.364700   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:20.367044   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:20.806573   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:20.813167   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:20.868369   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:20.870751   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:21.305995   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:21.312085   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:21.359803   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:21.369098   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:21.808020   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:21.812439   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:21.861762   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:21.865632   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:22.251413   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:22.306544   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:22.311444   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:22.363626   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:22.365260   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:22.806131   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:22.812629   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:22.860071   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:22.865097   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:23.306736   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:23.312372   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:23.362139   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:23.365983   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:23.805758   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:23.811989   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:23.860249   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:23.864353   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:24.251752   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:24.306423   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:24.312562   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:24.359808   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:24.364701   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:24.806552   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:24.813154   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:24.860008   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:24.864400   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:25.306763   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:25.312685   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:25.360634   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:25.365789   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:25.806827   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:25.812288   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:25.860904   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:25.864383   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:26.252006   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:26.305869   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:26.313315   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:26.359588   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:26.367081   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:26.806789   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:26.812132   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:26.860186   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:26.864986   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:27.306197   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:27.311840   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:27.359811   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:27.364956   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:27.806045   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:27.812688   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:27.868509   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:27.869679   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:28.307004   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:28.312918   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:28.360027   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:28.365119   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:28.750789   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:28.805831   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:28.811983   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:28.860500   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:28.865772   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:29.306856   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:29.312241   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:29.360836   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:29.366530   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:29.805623   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:29.812354   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:29.859147   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:29.864702   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:30.307855   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:30.326565   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:30.359566   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:30.370380   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:31.077878   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:31.080047   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:31.083255   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:31.084975   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:31.089823   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:31.308344   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:31.313651   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:31.359813   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:31.365225   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:31.806569   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:31.812748   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:31.861278   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:31.866519   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:32.307301   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:32.311370   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:32.361552   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:32.364389   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:32.807760   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:32.811700   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:32.859827   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:32.866330   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:33.252374   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:33.306430   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:33.311872   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:33.359678   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:33.364927   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:33.807100   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:33.812600   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:33.860311   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:33.865000   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:34.306787   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:34.311853   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:34.359961   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:34.364692   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:34.806393   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:34.811991   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:34.859930   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:34.864888   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:35.306786   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:35.311763   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:35.359665   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:35.364726   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:35.751068   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:35.807210   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:35.811423   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:35.860106   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:35.864127   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:36.307479   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:36.312370   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:36.359803   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:36.365124   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:36.806583   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:36.811706   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:36.861183   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:36.864113   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:37.306130   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:37.313113   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:37.360310   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:37.364514   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:37.756803   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:37.805999   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:37.812686   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:37.859688   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:37.865281   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:38.306141   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:38.316002   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:38.361127   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:38.365862   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:38.806792   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:38.812037   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:38.861468   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:38.865474   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:39.306331   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:39.312653   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:39.359974   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:39.364073   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:39.806055   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:39.812986   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:39.860426   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:39.864656   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:40.251034   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:40.306282   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:40.311993   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:40.359435   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:40.364513   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:41.129040   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:41.129717   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:41.130523   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:41.131077   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:41.306033   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:41.312532   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:41.361234   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:41.364475   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:41.805876   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:41.812230   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:41.860067   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:41.863862   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:42.251270   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:42.306130   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:42.312572   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:42.359325   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:42.364507   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:42.806151   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:42.813663   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:42.859408   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:42.864622   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:43.305522   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:43.311906   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:43.359435   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:43.364848   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:43.807001   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:43.812321   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:43.859774   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:43.865137   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:44.252053   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:44.314995   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:44.315081   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:44.361146   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:44.366486   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:44.813825   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:44.823559   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:44.860668   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:44.866930   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:45.306711   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:45.313084   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:45.360393   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:45.365595   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:45.808364   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:45.811272   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:45.860403   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:45.866280   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:46.255616   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:46.307104   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:46.316214   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:46.363096   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:46.365630   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:46.805929   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:46.812569   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:46.859419   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:46.864700   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:47.306238   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:47.315922   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:47.360741   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:47.364941   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:47.806245   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:47.817534   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:47.860546   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:47.864893   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:48.306663   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:48.312287   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:48.360173   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:48.364253   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:48.751846   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:48.806130   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:48.812581   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:48.860349   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:48.866901   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:49.306647   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:49.313375   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:49.359742   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:49.372608   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:49.809936   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:49.813659   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:49.859111   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:49.864206   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:50.305912   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:50.312908   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:50.359922   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:50.364282   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:50.752228   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:50.806394   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:50.812431   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:50.861240   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:50.864570   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:51.306781   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:51.312573   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:51.359919   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:51.364686   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:51.807183   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:51.812712   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:51.860298   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:51.865218   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:52.308033   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:52.312731   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:52.359736   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:52.365230   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:52.755882   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:52.806774   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:52.812371   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:52.934557   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:52.939011   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:53.306909   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:53.312510   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:53.360836   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:53.365407   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:53.806351   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:53.812266   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:53.859920   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:53.865413   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:54.307563   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:54.311753   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:54.360130   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:54.364760   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:54.806126   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:54.814788   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:54.859342   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:54.864643   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:55.511452   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:55.512347   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:55.512831   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:55.515177   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:55.517656   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:55.806785   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:55.814232   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:55.859880   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:55.868423   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:56.305798   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:56.311940   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:56.360245   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:56.364814   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:56.807745   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:56.819274   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:56.860458   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:56.865098   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:57.307296   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:57.311692   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:57.359863   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:57.365489   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:57.752390   14407 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"False"
	I0425 18:33:57.806529   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:57.812736   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:57.859925   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:57.865425   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:58.306572   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:58.313896   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:58.360112   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:58.366091   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:58.806133   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:58.812734   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:58.859731   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:58.865188   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:59.312259   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:59.316925   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:59.361097   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:59.365586   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:33:59.806668   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:33:59.814608   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:33:59.860182   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:33:59.864138   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:00.251718   14407 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace has status "Ready":"True"
	I0425 18:34:00.251744   14407 pod_ready.go:81] duration metric: took 47.007105611s for pod "nvidia-device-plugin-daemonset-4tmhd" in "kube-system" namespace to be "Ready" ...
	I0425 18:34:00.251761   14407 pod_ready.go:38] duration metric: took 48.121235014s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 18:34:00.251779   14407 api_server.go:52] waiting for apiserver process to appear ...
	I0425 18:34:00.251829   14407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:34:00.270545   14407 api_server.go:72] duration metric: took 54.326488387s to wait for apiserver process to appear ...
	I0425 18:34:00.270582   14407 api_server.go:88] waiting for apiserver healthz status ...
	I0425 18:34:00.270604   14407 api_server.go:253] Checking apiserver healthz at https://192.168.39.239:8443/healthz ...
	I0425 18:34:00.274815   14407 api_server.go:279] https://192.168.39.239:8443/healthz returned 200:
	ok
	I0425 18:34:00.275915   14407 api_server.go:141] control plane version: v1.30.0
	I0425 18:34:00.275938   14407 api_server.go:131] duration metric: took 5.347958ms to wait for apiserver health ...
	I0425 18:34:00.275949   14407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 18:34:00.285339   14407 system_pods.go:59] 18 kube-system pods found
	I0425 18:34:00.285371   14407 system_pods.go:61] "coredns-7db6d8ff4d-6wpfr" [a4f7208b-0870-4a3c-bb2e-e6ad6d87404b] Running
	I0425 18:34:00.285382   14407 system_pods.go:61] "csi-hostpath-attacher-0" [c938c096-1833-4f10-b4fc-27cda6579f8b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0425 18:34:00.285390   14407 system_pods.go:61] "csi-hostpath-resizer-0" [e4a15e27-1979-40da-a400-a7fc1b6fe78c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0425 18:34:00.285401   14407 system_pods.go:61] "csi-hostpathplugin-fprlv" [b9e25dba-dbbc-46ee-be05-349125de51e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0425 18:34:00.285408   14407 system_pods.go:61] "etcd-addons-477322" [e6e3f83f-3036-4a38-8c6b-2a64085baec5] Running
	I0425 18:34:00.285413   14407 system_pods.go:61] "kube-apiserver-addons-477322" [d33f75a1-63a3-4dd6-b700-c6df57e50bed] Running
	I0425 18:34:00.285419   14407 system_pods.go:61] "kube-controller-manager-addons-477322" [dda70622-1ef9-4f3f-8e04-d40e44885694] Running
	I0425 18:34:00.285426   14407 system_pods.go:61] "kube-ingress-dns-minikube" [c2b29e86-902f-43bc-95db-5900cc3f5725] Running
	I0425 18:34:00.285434   14407 system_pods.go:61] "kube-proxy-rgvqp" [aa79ab2f-3125-426d-a63a-8dba44e5e06c] Running
	I0425 18:34:00.285439   14407 system_pods.go:61] "kube-scheduler-addons-477322" [0e99db52-9c82-4715-a6a2-dc9e90dcb9c1] Running
	I0425 18:34:00.285454   14407 system_pods.go:61] "metrics-server-c59844bb4-bw7rc" [5e6ef0c9-2d28-429e-a92f-7bb24314635d] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 18:34:00.285462   14407 system_pods.go:61] "nvidia-device-plugin-daemonset-4tmhd" [e5294b6c-a965-4df2-8c07-1696d3c1ea57] Running
	I0425 18:34:00.285472   14407 system_pods.go:61] "registry-proxy-vcjwf" [daff0d5c-8ea3-43fd-948e-5ac439d1a5a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0425 18:34:00.285485   14407 system_pods.go:61] "registry-wf47l" [0d3a67d8-466b-42fa-8b7b-e306fee91c84] Running
	I0425 18:34:00.285496   14407 system_pods.go:61] "snapshot-controller-745499f584-8fj49" [bb9e98cb-566f-4856-a7c4-5ae8da1442f4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0425 18:34:00.285507   14407 system_pods.go:61] "snapshot-controller-745499f584-q6cdl" [8f39480c-bcbe-4ed0-8f86-c5afca6fda25] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0425 18:34:00.285516   14407 system_pods.go:61] "storage-provisioner" [930ba2a2-a45e-4db3-9e58-f57677e70097] Running
	I0425 18:34:00.285527   14407 system_pods.go:61] "tiller-deploy-6677d64bcd-dkd7m" [aa079112-30fb-4401-9271-cf4059a1c2ce] Running
	I0425 18:34:00.285537   14407 system_pods.go:74] duration metric: took 9.579541ms to wait for pod list to return data ...
	I0425 18:34:00.285550   14407 default_sa.go:34] waiting for default service account to be created ...
	I0425 18:34:00.287890   14407 default_sa.go:45] found service account: "default"
	I0425 18:34:00.287909   14407 default_sa.go:55] duration metric: took 2.349805ms for default service account to be created ...
	I0425 18:34:00.287917   14407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 18:34:00.296368   14407 system_pods.go:86] 18 kube-system pods found
	I0425 18:34:00.296395   14407 system_pods.go:89] "coredns-7db6d8ff4d-6wpfr" [a4f7208b-0870-4a3c-bb2e-e6ad6d87404b] Running
	I0425 18:34:00.296403   14407 system_pods.go:89] "csi-hostpath-attacher-0" [c938c096-1833-4f10-b4fc-27cda6579f8b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0425 18:34:00.296412   14407 system_pods.go:89] "csi-hostpath-resizer-0" [e4a15e27-1979-40da-a400-a7fc1b6fe78c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0425 18:34:00.296423   14407 system_pods.go:89] "csi-hostpathplugin-fprlv" [b9e25dba-dbbc-46ee-be05-349125de51e4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0425 18:34:00.296440   14407 system_pods.go:89] "etcd-addons-477322" [e6e3f83f-3036-4a38-8c6b-2a64085baec5] Running
	I0425 18:34:00.296447   14407 system_pods.go:89] "kube-apiserver-addons-477322" [d33f75a1-63a3-4dd6-b700-c6df57e50bed] Running
	I0425 18:34:00.296457   14407 system_pods.go:89] "kube-controller-manager-addons-477322" [dda70622-1ef9-4f3f-8e04-d40e44885694] Running
	I0425 18:34:00.296464   14407 system_pods.go:89] "kube-ingress-dns-minikube" [c2b29e86-902f-43bc-95db-5900cc3f5725] Running
	I0425 18:34:00.296474   14407 system_pods.go:89] "kube-proxy-rgvqp" [aa79ab2f-3125-426d-a63a-8dba44e5e06c] Running
	I0425 18:34:00.296481   14407 system_pods.go:89] "kube-scheduler-addons-477322" [0e99db52-9c82-4715-a6a2-dc9e90dcb9c1] Running
	I0425 18:34:00.296492   14407 system_pods.go:89] "metrics-server-c59844bb4-bw7rc" [5e6ef0c9-2d28-429e-a92f-7bb24314635d] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 18:34:00.296499   14407 system_pods.go:89] "nvidia-device-plugin-daemonset-4tmhd" [e5294b6c-a965-4df2-8c07-1696d3c1ea57] Running
	I0425 18:34:00.296507   14407 system_pods.go:89] "registry-proxy-vcjwf" [daff0d5c-8ea3-43fd-948e-5ac439d1a5a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0425 18:34:00.296514   14407 system_pods.go:89] "registry-wf47l" [0d3a67d8-466b-42fa-8b7b-e306fee91c84] Running
	I0425 18:34:00.296520   14407 system_pods.go:89] "snapshot-controller-745499f584-8fj49" [bb9e98cb-566f-4856-a7c4-5ae8da1442f4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0425 18:34:00.296529   14407 system_pods.go:89] "snapshot-controller-745499f584-q6cdl" [8f39480c-bcbe-4ed0-8f86-c5afca6fda25] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0425 18:34:00.296537   14407 system_pods.go:89] "storage-provisioner" [930ba2a2-a45e-4db3-9e58-f57677e70097] Running
	I0425 18:34:00.296548   14407 system_pods.go:89] "tiller-deploy-6677d64bcd-dkd7m" [aa079112-30fb-4401-9271-cf4059a1c2ce] Running
	I0425 18:34:00.296563   14407 system_pods.go:126] duration metric: took 8.637829ms to wait for k8s-apps to be running ...
	I0425 18:34:00.296575   14407 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 18:34:00.296622   14407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:34:00.306733   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:00.312479   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:00.315772   14407 system_svc.go:56] duration metric: took 19.187649ms WaitForService to wait for kubelet
	I0425 18:34:00.315804   14407 kubeadm.go:576] duration metric: took 54.371751122s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 18:34:00.315829   14407 node_conditions.go:102] verifying NodePressure condition ...
	I0425 18:34:00.319177   14407 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:34:00.319205   14407 node_conditions.go:123] node cpu capacity is 2
	I0425 18:34:00.319227   14407 node_conditions.go:105] duration metric: took 3.391731ms to run NodePressure ...
	I0425 18:34:00.319242   14407 start.go:240] waiting for startup goroutines ...
	I0425 18:34:00.360096   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:00.364692   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:00.806737   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:00.812043   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:00.859342   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:00.864106   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:01.306725   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:01.313443   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:01.361349   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:01.365136   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:01.807564   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:01.812114   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:01.862167   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:01.868840   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:02.307469   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:02.315552   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:02.360564   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:02.364745   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:02.807056   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:02.812847   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:02.860950   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:02.866181   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:03.307223   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:03.312617   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:03.361998   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:03.368475   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:04.104946   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:04.105616   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:04.107070   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:04.111038   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:04.306617   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:04.312127   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:04.360369   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:04.365178   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:04.807494   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:04.811947   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:04.859974   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:04.865477   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:05.305602   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:05.314081   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:05.359686   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:05.365333   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:05.806523   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:05.811908   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:05.860061   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:05.865279   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:06.306180   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:06.312458   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:06.361558   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:06.364412   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:06.806992   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:06.813035   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0425 18:34:06.860760   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:06.865315   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:07.305798   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:07.316188   14407 kapi.go:107] duration metric: took 51.50865662s to wait for kubernetes.io/minikube-addons=registry ...
	I0425 18:34:07.359932   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:07.365341   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:07.809873   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:07.859303   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:07.864569   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:08.306160   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:08.361064   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:08.364880   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:08.806891   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:08.860301   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:08.864691   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:09.308007   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:09.360115   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:09.364478   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:09.806603   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:09.860328   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:09.864397   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:10.306521   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:10.361128   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:10.365356   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:10.806904   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:11.121012   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:11.124533   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:11.306163   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:11.360193   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:11.367460   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:11.806250   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:11.859568   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:11.864725   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:12.307041   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:12.366629   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:12.371470   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:12.809224   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:12.859811   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:12.866274   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:13.311932   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:13.365770   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:13.365956   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:13.807011   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:13.862337   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:13.864775   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:14.310298   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:14.360874   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:14.366318   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:14.806329   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:14.860534   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:14.864894   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:15.306916   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:15.360769   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:15.365925   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:15.807435   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:15.861298   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:15.864563   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:16.307602   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:16.359046   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:16.364408   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:16.806000   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:16.859769   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:16.865002   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:17.307318   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:17.361315   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:17.364687   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:17.807242   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:17.860090   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:17.865472   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:18.312112   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:18.371877   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:18.377610   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:18.806692   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:18.860861   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:18.866381   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:19.307751   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:19.367438   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:19.370470   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:19.810987   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:19.869722   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:19.869739   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:20.307136   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:20.374680   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:20.380618   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:20.806852   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:20.860783   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:20.866087   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:21.306886   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:21.360466   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:21.366366   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:21.806323   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:21.861793   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:21.864599   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:22.308163   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:22.360043   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:22.364956   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:22.809017   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:22.861876   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:22.870617   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:23.307206   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:23.360316   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:23.364544   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:23.806328   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:23.860632   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:23.864903   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:24.307177   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:24.359761   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:24.364928   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:24.806591   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:24.860107   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:24.864314   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:25.306005   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:25.359370   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:25.364256   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:25.806866   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:25.859490   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:25.865047   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:26.307181   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:26.360198   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:26.364619   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:26.806180   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:26.872441   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:26.876768   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:27.313347   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:27.363339   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:27.365093   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:27.807145   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:27.861674   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:27.865838   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:28.407125   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:28.408050   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:28.408199   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:28.809788   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:28.860317   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:28.864387   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:29.306649   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:29.359719   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:29.365149   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:29.807437   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:29.859530   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:29.864627   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:30.309553   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:30.359859   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:30.366649   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:30.805776   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:30.859397   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:30.867185   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:31.311124   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:31.362072   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:31.367818   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:31.806630   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:31.866808   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:31.870755   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:32.307156   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:32.359375   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:32.364421   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:32.807597   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:32.860267   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:32.864953   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:33.306962   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:33.359151   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:33.364105   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:33.806703   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:33.860422   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:33.864879   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:34.306867   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:34.359176   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:34.364408   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:34.806420   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:34.862008   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:34.866167   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:35.305471   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:35.360384   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:35.364338   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:35.809549   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:35.860043   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0425 18:34:35.865689   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:36.305933   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:36.359334   14407 kapi.go:107] duration metric: took 1m18.505375184s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0425 18:34:36.364546   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:36.807249   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:36.871233   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:37.309131   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:37.365660   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:37.807885   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:37.865508   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:38.308137   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:38.368739   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:38.807923   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:38.865936   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:39.306854   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:39.364848   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:39.806737   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:39.866482   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:40.308028   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:40.365882   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:40.806527   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:40.864847   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:41.306511   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:41.365913   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:41.807151   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:41.865524   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:42.306738   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:42.365921   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:42.807057   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:42.864877   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:43.306555   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:43.365507   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:43.806598   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:43.866087   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:44.307840   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:44.366171   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:44.810132   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:44.865266   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:45.306562   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:45.365994   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:45.806963   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:45.865393   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:46.305783   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:46.365878   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:46.807216   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:46.865339   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:47.306367   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:47.366450   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:47.806294   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:47.864823   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:48.306729   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:48.365634   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:48.807383   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:48.865239   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:49.306850   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:49.365072   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:49.810818   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:49.865075   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:50.307705   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:50.365945   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:50.806981   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:50.864486   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:51.306258   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:51.365201   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:51.806916   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:51.867838   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:52.306821   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:52.368009   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:52.807000   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:52.864898   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:53.309341   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:53.365827   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:53.806469   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:53.865439   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:54.307587   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:54.366182   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:54.927853   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:54.928078   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:55.306735   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:55.367364   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:55.809842   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:55.864973   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:56.307018   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:56.366531   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:56.807133   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:56.865391   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:57.306582   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:57.365713   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:57.807572   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:57.868704   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:58.307846   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:58.365006   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:58.807938   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:58.865004   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:59.306201   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:59.366560   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:34:59.807210   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:34:59.864847   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:00.307661   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:00.366465   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:00.807482   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:00.868125   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:01.307445   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:01.365271   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:01.807892   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:01.864985   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:02.309153   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:02.365939   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:02.807272   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:02.865205   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:03.306478   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:03.365547   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:03.806104   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:03.866176   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:04.306168   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:04.365975   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:04.806382   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:04.865203   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:05.307578   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:05.365569   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:05.805993   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:05.865381   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:06.310289   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:06.365837   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:06.807014   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:06.865950   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:07.311834   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:07.365227   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:07.806929   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:07.865172   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:08.308029   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:08.365869   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:08.807050   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:08.866054   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:09.306924   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:09.365483   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:09.808720   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:09.865797   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:10.306760   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:10.365749   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:10.806402   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:10.865486   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:11.307956   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:11.365171   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:11.805830   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:11.865773   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:12.310568   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:12.367173   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:12.805884   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:12.866403   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:13.306611   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:13.366622   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:13.806615   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:13.867732   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:14.307873   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:14.364845   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:14.807133   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:14.865193   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:15.310876   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:15.364714   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:15.806474   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:15.865675   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:16.309504   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:16.365575   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:16.808066   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:16.865237   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:17.307402   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:17.365814   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:17.807576   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:17.865163   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:18.319725   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:18.365677   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:18.806219   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:18.867245   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:19.312191   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:19.364944   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:19.807177   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:19.865272   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:20.307136   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:20.365481   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:20.806683   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:20.865401   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:21.306563   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:21.365464   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:21.806392   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:21.865349   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:22.305649   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:22.365508   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:22.808896   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:22.865694   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:23.306594   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:23.365382   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:23.808202   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:23.865277   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:24.307158   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:24.365158   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:24.809807   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:24.865696   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:25.307077   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:25.365756   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:25.806788   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:25.865146   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:26.307448   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:26.366090   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:26.807105   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:26.864963   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:27.306553   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:27.365597   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:27.812348   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:27.865673   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:28.306252   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:28.365228   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:28.805953   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:28.865568   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:29.306063   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:29.365233   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:29.806004   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:29.864426   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:30.306814   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:30.364265   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:30.805902   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:30.865324   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:31.306873   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:31.364938   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:31.807208   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:31.864636   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:32.306992   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:32.365388   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:32.806288   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:32.864628   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:33.307678   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:33.368337   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:33.806112   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:33.864882   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:34.306220   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:34.365791   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:34.806472   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:34.865427   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:35.305645   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:35.365235   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:35.805669   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:35.865199   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:36.305614   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:36.365835   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:36.806171   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:36.865512   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:37.305942   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:37.364547   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:37.806117   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:37.864480   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:38.306067   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:38.364789   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:38.806552   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:38.865633   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:39.310910   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:39.364635   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:39.807596   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:39.868626   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:40.306633   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:40.366003   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:40.805892   14407 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0425 18:35:40.864118   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:41.306236   14407 kapi.go:107] duration metric: took 2m25.504906965s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0425 18:35:41.365468   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:41.864944   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:42.367399   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:42.865902   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:43.365735   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:44.038176   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:44.365493   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:44.864950   14407 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0425 18:35:45.364588   14407 kapi.go:107] duration metric: took 2m25.503414037s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0425 18:35:45.366384   14407 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-477322 cluster.
	I0425 18:35:45.367725   14407 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0425 18:35:45.369101   14407 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0425 18:35:45.370597   14407 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, metrics-server, helm-tiller, ingress-dns, inspektor-gadget, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0425 18:35:45.371895   14407 addons.go:505] duration metric: took 2m39.427770005s for enable addons: enabled=[nvidia-device-plugin storage-provisioner-rancher storage-provisioner metrics-server helm-tiller ingress-dns inspektor-gadget cloud-spanner yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0425 18:35:45.371936   14407 start.go:245] waiting for cluster config update ...
	I0425 18:35:45.371957   14407 start.go:254] writing updated cluster config ...
	I0425 18:35:45.372197   14407 ssh_runner.go:195] Run: rm -f paused
	I0425 18:35:45.423823   14407 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 18:35:45.425178   14407 out.go:177] * Done! kubectl is now configured to use "addons-477322" cluster and "default" namespace by default
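	For reference, the `gcp-auth-skip-secret` opt-out mentioned in the gcp-auth messages above is applied as a pod label. A minimal sketch follows; the label key comes from the log output, while the pod name, container, image, and the "true" value are illustrative assumptions rather than anything shown in this report:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-auth          # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"     # opt this pod out of the gcp-auth credential mount (value assumed)
	spec:
	  containers:
	    - name: app                      # hypothetical container
	      image: registry.k8s.io/pause:3.9

	As the log also notes, pods that already exist keep their current mounts; they need to be recreated, or the addon re-enabled with the --refresh flag (e.g. `minikube addons enable gcp-auth --refresh`), for credentials to be mounted.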
	
	
	==> CRI-O <==
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.876631119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=144306a3-f9d7-4c8a-84bc-e892a9172cf9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.876956942Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ccb67610efb69bc365548edb3198a2dff3a42514865ab1033b33e7f7b5c90af,PodSandboxId:7d4bc39231a790c6b454e328ee9ca88553ff3d167528fe0a2baf513490142817,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714070331473706014,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-nstfm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88ef2b0e-e7d8-48d4-b29b-658685abefae,},Annotations:map[string]string{io.kubernetes.container.hash: ba810db,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402c7e90494399d2feeaa235e691145866b2725e37aa478f5804487a743ac56d,PodSandboxId:105e8c1342c557b597d234bbc587695ba49b3c540dbe40ac0c65b9342cca3c2f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714070191079882306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 174bca0d-e34d-4acf-8cb7-74f929b70346,},Annotations:map[string]string{io.kuberne
tes.container.hash: 73757bdd,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e423af13f38271273791d9ffaaba540df7d18373a078a69cd5a8ffe096ab0c6,PodSandboxId:5e588693571d85f475e2522defcd89fa2b3eb4972947ef0afebf135f7ddc22e2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714070168169033617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-4hdvs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: b0b1c3bf-f2b2-4b6a-ba59-104181e36d01,},Annotations:map[string]string{io.kubernetes.container.hash: c3244dca,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444cf98d597b26ac307437fc04a6576f39c4ddc200c2eeb2e0444204f26594e7,PodSandboxId:a90eb47d1f5c3908965f516b8db8a75cc1a875de777df4706de32481860f2794,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714070144465611562,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fmcbp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8ed5953a-1f88-4b6d-abba-be0571627016,},Annotations:map[string]string{io.kubernetes.container.hash: dbc4a0ba,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df04bc6e0a8645fec759d27fa1ffcc26a8380f5ad630eeba571a082084dfe0cf,PodSandboxId:82d5a69e4c3c29ae7933af38b110fb706734a0d466fa1fc222a57a98f99d5387,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171407
0052690182505,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-z4ljv,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3df1cc7b-c249-4597-b8c9-3a9b4bc48222,},Annotations:map[string]string{io.kubernetes.container.hash: 27dec842,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c,PodSandboxId:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714070000274234940,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bw7rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,},Annotations:map[string]string{io.kubernetes.container.hash: 6ea71e24,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a581b2bef974518ff15839d7127b97175c6ca2c11630a8877145f8e707dacfa,PodSandboxId:4a825f45bb82f480f19c760f92f5fb3d1cd992a4a2a5607cf40300022a7a04bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714069995016203233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930ba2a2-a45e-4db3-9e58-f57677e70097,},Annotations:map[string]string{io.kubernetes.container.hash: f492499d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04a27897034cedb321fa5f06387e220bd535ffa851de1660e5098a7206068c5,PodSandboxId:c6282053a094a8dd1a76c99595926343e07c5331a83796e173f5d3fdaf89494e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c007
97ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714069989920485509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6wpfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4f7208b-0870-4a3c-bb2e-e6ad6d87404b,},Annotations:map[string]string{io.kubernetes.container.hash: 7416a455,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d13c42367e56a88594713117ba450b13bde86d14fdd1911ed31bcae79c6255,PodSand
boxId:3c411906655780331b0753e2372b30e75495c6fd8632c325dc411fb29f55f4e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714069986854907049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgvqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa79ab2f-3125-426d-a63a-8dba44e5e06c,},Annotations:map[string]string{io.kubernetes.container.hash: a2478d34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba098b391087ab69c154d60e93cdbca9709dae3e860e358078373ea832309cad,PodSandboxId:c9baef8b5a1b164f4c9c26b4322
97e34f97ec6569ead5e5a61f84c686cace732,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714069966399786701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9ea0a35cb7ac41978bfcc3c445f98ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5bfc3a10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9db646bf6dbf0e9d7d21d563363f55428cc69781ff0b871042fc82cd43a56d,PodSandboxId:51bd1af867d66ae37df43e25a0d4fa0940a5273537029b7bbc608342f253ffc6,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714069966287069223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1f6a44bb1fb2be1ae94c311e3fa409,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbbc3655cb9eee9c48e5c703032e6c66e0f3c1d8fe46c50b43c2e8e617986f7,PodSandboxId:05fdcdfc675f3db365c6e01088655c9ffc8b307f104d0e356bd3034d2a6c2397,Metadata:&ContainerMetadat
a{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714069966335430671,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad0cd299b604c07a812a0bc88262082,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ce0b80f86d5e85292f94da6f1cd5d7db205853dfcfe415aa0059ccb450f83,PodSandboxId:9fc9b6d3e29c535836c0dabd618a8f703355936625aa638d0e448264019d0a04,Metadata:&Contai
nerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714069966258899721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de2573bdfcfa3e02e7bc88b90313a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: c53b7525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=144306a3-f9d7-4c8a-84bc-e892a9172cf9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.885877357Z" level=debug msg="Found exit code for 1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c: 0" file="oci/runtime_oci.go:1022"
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.886254736Z" level=debug msg="Skipping status update for: &{State:{Version:1.0.2-dev ID:1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.hash:6ea71e24 io.kubernetes.container.name:metrics-server io.kubernetes.container.ports:[{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}] io.kubernetes.container.restartCount:0 io.kubernetes.container.terminationMessagePath:/dev/termination-log io.kubernetes.container.terminationMessagePolicy:File io.kubernetes.cri-o.Annotations:{\"io.kubernetes.container.hash\":\"6ea71e24\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"https\\\",\\\"containerPort\\\":4443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.c
ontainer.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"} io.kubernetes.cri-o.ContainerID:1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c io.kubernetes.cri-o.ContainerType:container io.kubernetes.cri-o.Created:2024-04-25T18:33:20.274385902Z io.kubernetes.cri-o.IP.0:10.244.0.9 io.kubernetes.cri-o.Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872 io.kubernetes.cri-o.ImageName:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a io.kubernetes.cri-o.ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62 io.kubernetes.cri-o.Labels:{\"io.kubernetes.container.name\":\"metrics-server\",\"io.kubernetes.pod.name\":\"metrics-server-c59844bb4-bw7rc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5e6ef0c9-2
d28-429e-a92f-7bb24314635d\"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-bw7rc_5e6ef0c9-2d28-429e-a92f-7bb24314635d/metrics-server/0.log io.kubernetes.cri-o.Metadata:{\"name\":\"metrics-server\"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/e8c2607be0a94ce0e8070ba8b26e63c4b7dd1ba70e477135dca669a3977300a0/merged io.kubernetes.cri-o.Name:k8s_metrics-server_metrics-server-c59844bb4-bw7rc_kube-system_5e6ef0c9-2d28-429e-a92f-7bb24314635d_0 io.kubernetes.cri-o.PlatformRuntimePath: io.kubernetes.cri-o.ResolvPath:/var/run/containers/storage/overlay-containers/8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd/userdata/resolv.conf io.kubernetes.cri-o.SandboxID:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd io.kubernetes.cri-o.SandboxName:k8s_metrics-server-c59844bb4-bw7rc_kube-system_5e6ef0c9-2d28-429e-a92f-7bb24314635d_0 io.kubernetes.cri-o.SeccompProfilePath:Unconfined io.kubernetes.cri-o.Stdin:false io.kubernetes.cri-o.StdinOn
ce:false io.kubernetes.cri-o.TTY:false io.kubernetes.cri-o.Volumes:[{\"container_path\":\"/tmp\",\"host_path\":\"/var/lib/kubelet/pods/5e6ef0c9-2d28-429e-a92f-7bb24314635d/volumes/kubernetes.io~empty-dir/tmp-dir\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5e6ef0c9-2d28-429e-a92f-7bb24314635d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5e6ef0c9-2d28-429e-a92f-7bb24314635d/containers/metrics-server/8f6bfa0a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5e6ef0c9-2d28-429e-a92f-7bb24314635d/volumes/kubernetes.io~projected/kube-api-access-9llpj\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}] io.kubernetes.pod.name:metrics-server-c59844bb4-bw7rc io.kubernetes.pod.na
mespace:kube-system io.kubernetes.pod.terminationGracePeriod:30 io.kubernetes.pod.uid:5e6ef0c9-2d28-429e-a92f-7bb24314635d kubernetes.io/config.seen:2024-04-25T18:33:12.336880873Z kubernetes.io/config.source:api]} Created:2024-04-25 18:33:20.363435081 +0000 UTC Started:2024-04-25 18:33:20.462434686 +0000 UTC m=+45.268263192 Finished:2024-04-25 18:41:44.831058181 +0000 UTC ExitCode:0xc0010cb800 OOMKilled:false SeccompKilled:false Error: InitPid:3946 InitStartTime:6717 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}" file="oci/runtime_oci.go:946" id=e9f9c524-d65f-4546-9882-c73b06116bb9 name=/runtime.v1.RuntimeService/StopContainer
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.889425416Z" level=info msg="Stopped container 1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c: kube-system/metrics-server-c59844bb4-bw7rc/metrics-server" file="server/container_stop.go:29" id=e9f9c524-d65f-4546-9882-c73b06116bb9 name=/runtime.v1.RuntimeService/StopContainer
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.889535545Z" level=debug msg="Response: &StopContainerResponse{}" file="otel-collector/interceptors.go:74" id=e9f9c524-d65f-4546-9882-c73b06116bb9 name=/runtime.v1.RuntimeService/StopContainer
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.889446233Z" level=debug msg="Event: REMOVE        \"/var/run/crio/exits/1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c\"" file="server/server.go:805"
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.890080407Z" level=debug msg="Request: &StopPodSandboxRequest{PodSandboxId:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,}" file="otel-collector/interceptors.go:62" id=10d3c0c9-d3b6-4fb4-a4f2-f4e90a5fc679 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.890126020Z" level=info msg="Stopping pod sandbox: 8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd" file="server/sandbox_stop.go:18" id=10d3c0c9-d3b6-4fb4-a4f2-f4e90a5fc679 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.890976375Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-bw7rc Namespace:kube-system ID:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd UID:5e6ef0c9-2d28-429e-a92f-7bb24314635d NetNS:/var/run/netns/fa04ee89-e53c-42bb-90e3-1b1b7202a43c Networks:[{Name:bridge Ifname:eth0}] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/pod5e6ef0c9-2d28-429e-a92f-7bb24314635d PodAnnotations:0xc00199e4f8}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.891187653Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-bw7rc from CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:667"
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.920144672Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=d73ca588-0fe9-489f-b505-4180c7b1f5a8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.920554373Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7d4bc39231a790c6b454e328ee9ca88553ff3d167528fe0a2baf513490142817,Metadata:&PodSandboxMetadata{Name:hello-world-app-86c47465fc-nstfm,Uid:88ef2b0e-e7d8-48d4-b29b-658685abefae,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714070327990977229,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-86c47465fc-nstfm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88ef2b0e-e7d8-48d4-b29b-658685abefae,pod-template-hash: 86c47465fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:38:47.674249728Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:105e8c1342c557b597d234bbc587695ba49b3c540dbe40ac0c65b9342cca3c2f,Metadata:&PodSandboxMetadata{Name:nginx,Uid:174bca0d-e34d-4acf-8cb7-74f929b70346,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1714070186631046684,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 174bca0d-e34d-4acf-8cb7-74f929b70346,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:36:26.312938105Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e588693571d85f475e2522defcd89fa2b3eb4972947ef0afebf135f7ddc22e2,Metadata:&PodSandboxMetadata{Name:headlamp-7559bf459f-4hdvs,Uid:b0b1c3bf-f2b2-4b6a-ba59-104181e36d01,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714070161185421519,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7559bf459f-4hdvs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b0b1c3bf-f2b2-4b6a-ba59-104181e36d01,pod-template-hash: 7559bf459f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
04-25T18:36:00.503979891Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a90eb47d1f5c3908965f516b8db8a75cc1a875de777df4706de32481860f2794,Metadata:&PodSandboxMetadata{Name:gcp-auth-5db96cd9b4-fmcbp,Uid:8ed5953a-1f88-4b6d-abba-be0571627016,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714070139099612039,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fmcbp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8ed5953a-1f88-4b6d-abba-be0571627016,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 5db96cd9b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:33:19.785546937Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:82d5a69e4c3c29ae7933af38b110fb706734a0d466fa1fc222a57a98f99d5387,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-5ddbf7d777-z4ljv,Uid:3df1cc7b-c249-4597-b8c9-3a9b4bc48222,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,Creat
edAt:1714069994036774740,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-z4ljv,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3df1cc7b-c249-4597-b8c9-3a9b4bc48222,pod-template-hash: 5ddbf7d777,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:33:13.420506164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-bw7rc,Uid:5e6ef0c9-2d28-429e-a92f-7bb24314635d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714069993022267202,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-bw7rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,k8s-app: metr
ics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:33:12.336880873Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a825f45bb82f480f19c760f92f5fb3d1cd992a4a2a5607cf40300022a7a04bf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:930ba2a2-a45e-4db3-9e58-f57677e70097,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714069991733706812,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930ba2a2-a45e-4db3-9e58-f57677e70097,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\
"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-25T18:33:11.114592001Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c6282053a094a8dd1a76c99595926343e07c5331a83796e173f5d3fdaf89494e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-6wpfr,Uid:a4f7208b-0870-4a3c-bb2e-e6ad6d87404b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714069986396768520,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-6wpfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4f7208b-0870-4a3c-bb2e-e6ad6d
87404b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:33:06.064741881Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c411906655780331b0753e2372b30e75495c6fd8632c325dc411fb29f55f4e4,Metadata:&PodSandboxMetadata{Name:kube-proxy-rgvqp,Uid:aa79ab2f-3125-426d-a63a-8dba44e5e06c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714069986231271144,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rgvqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa79ab2f-3125-426d-a63a-8dba44e5e06c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:33:05.295414341Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:05fdcdfc675f3db365c6e01088655c9ffc8b307f104d0e356bd3034d2a6c2397,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addo
ns-477322,Uid:6ad0cd299b604c07a812a0bc88262082,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714069966123246778,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad0cd299b604c07a812a0bc88262082,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6ad0cd299b604c07a812a0bc88262082,kubernetes.io/config.seen: 2024-04-25T18:32:45.621260634Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c9baef8b5a1b164f4c9c26b432297e34f97ec6569ead5e5a61f84c686cace732,Metadata:&PodSandboxMetadata{Name:etcd-addons-477322,Uid:f9ea0a35cb7ac41978bfcc3c445f98ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714069966113181781,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-477322,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f9ea0a35cb7ac41978bfcc3c445f98ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.239:2379,kubernetes.io/config.hash: f9ea0a35cb7ac41978bfcc3c445f98ec,kubernetes.io/config.seen: 2024-04-25T18:32:45.621262870Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:51bd1af867d66ae37df43e25a0d4fa0940a5273537029b7bbc608342f253ffc6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-477322,Uid:eb1f6a44bb1fb2be1ae94c311e3fa409,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714069966083905292,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1f6a44bb1fb2be1ae94c311e3fa409,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: eb1f6a44bb1fb2be1ae94c311e3fa409,kubernetes.io/config.seen: 2024-04-25T18:32:45.621261896Z,
kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9fc9b6d3e29c535836c0dabd618a8f703355936625aa638d0e448264019d0a04,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-477322,Uid:de2573bdfcfa3e02e7bc88b90313a5cc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714069966080825403,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de2573bdfcfa3e02e7bc88b90313a5cc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.239:8443,kubernetes.io/config.hash: de2573bdfcfa3e02e7bc88b90313a5cc,kubernetes.io/config.seen: 2024-04-25T18:32:45.621256421Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d73ca588-0fe9-489f-b505-4180c7b1f5a8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.922002022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6897f7b8-6d1f-458a-968e-34573022a183 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.922054575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6897f7b8-6d1f-458a-968e-34573022a183 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.922832225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ccb67610efb69bc365548edb3198a2dff3a42514865ab1033b33e7f7b5c90af,PodSandboxId:7d4bc39231a790c6b454e328ee9ca88553ff3d167528fe0a2baf513490142817,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714070331473706014,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-nstfm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88ef2b0e-e7d8-48d4-b29b-658685abefae,},Annotations:map[string]string{io.kubernetes.container.hash: ba810db,io.kubernetes.container
.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:402c7e90494399d2feeaa235e691145866b2725e37aa478f5804487a743ac56d,PodSandboxId:105e8c1342c557b597d234bbc587695ba49b3c540dbe40ac0c65b9342cca3c2f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714070191079882306,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 174bca0d-e34d-4acf-8cb7-74f929b70346,},Annotations:map[string]string{io.kuberne
tes.container.hash: 73757bdd,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e423af13f38271273791d9ffaaba540df7d18373a078a69cd5a8ffe096ab0c6,PodSandboxId:5e588693571d85f475e2522defcd89fa2b3eb4972947ef0afebf135f7ddc22e2,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714070168169033617,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-4hdvs,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: b0b1c3bf-f2b2-4b6a-ba59-104181e36d01,},Annotations:map[string]string{io.kubernetes.container.hash: c3244dca,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:444cf98d597b26ac307437fc04a6576f39c4ddc200c2eeb2e0444204f26594e7,PodSandboxId:a90eb47d1f5c3908965f516b8db8a75cc1a875de777df4706de32481860f2794,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714070144465611562,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fmcbp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8ed5953a-1f88-4b6d-abba-be0571627016,},Annotations:map[string]string{io.kubernetes.container.hash: dbc4a0ba,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df04bc6e0a8645fec759d27fa1ffcc26a8380f5ad630eeba571a082084dfe0cf,PodSandboxId:82d5a69e4c3c29ae7933af38b110fb706734a0d466fa1fc222a57a98f99d5387,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171407
0052690182505,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-z4ljv,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3df1cc7b-c249-4597-b8c9-3a9b4bc48222,},Annotations:map[string]string{io.kubernetes.container.hash: 27dec842,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c,PodSandboxId:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1714070000274234940,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bw7rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,},Annotations:map[string]string{io.kubernetes.container.hash: 6ea71e24,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a581b2bef974518ff15839d7127b97175c6ca2c11630a8877145f8e707dacfa,PodSandboxId:4a825f45bb82f480f19c760f92f5fb3d1cd992a4a2a5607cf40300022a7a04bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714069995016203233,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 930ba2a2-a45e-4db3-9e58-f57677e70097,},Annotations:map[string]string{io.kubernetes.container.hash: f492499d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b04a27897034cedb321fa5f06387e220bd535ffa851de1660e5098a7206068c5,PodSandboxId:c6282053a094a8dd1a76c99595926343e07c5331a83796e173f5d3fdaf89494e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714069989920485509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6wpfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4f7208b-0870-4a3c-bb2e-e6ad6d87404b,},Annotations:map[string]string{io.kubernetes.container.hash: 7416a455,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d13c42367e56a88594713117ba450b13bde86d14fdd1911ed31bcae79c6255,PodSandb
oxId:3c411906655780331b0753e2372b30e75495c6fd8632c325dc411fb29f55f4e4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714069986854907049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgvqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa79ab2f-3125-426d-a63a-8dba44e5e06c,},Annotations:map[string]string{io.kubernetes.container.hash: a2478d34,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba098b391087ab69c154d60e93cdbca9709dae3e860e358078373ea832309cad,PodSandboxId:c9baef8b5a1b164f4c9c26b43229
7e34f97ec6569ead5e5a61f84c686cace732,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714069966399786701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9ea0a35cb7ac41978bfcc3c445f98ec,},Annotations:map[string]string{io.kubernetes.container.hash: 5bfc3a10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9db646bf6dbf0e9d7d21d563363f55428cc69781ff0b871042fc82cd43a56d,PodSandboxId:51bd1af867d66ae37df43e25a0d4fa0940a5273537029b7bbc608342f253ffc6,Metadata:&
ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714069966287069223,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb1f6a44bb1fb2be1ae94c311e3fa409,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcbbc3655cb9eee9c48e5c703032e6c66e0f3c1d8fe46c50b43c2e8e617986f7,PodSandboxId:05fdcdfc675f3db365c6e01088655c9ffc8b307f104d0e356bd3034d2a6c2397,Metadata:&ContainerMetadata
{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714069966335430671,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ad0cd299b604c07a812a0bc88262082,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e91ce0b80f86d5e85292f94da6f1cd5d7db205853dfcfe415aa0059ccb450f83,PodSandboxId:9fc9b6d3e29c535836c0dabd618a8f703355936625aa638d0e448264019d0a04,Metadata:&Contain
erMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714069966258899721,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-477322,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de2573bdfcfa3e02e7bc88b90313a5cc,},Annotations:map[string]string{io.kubernetes.container.hash: c53b7525,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6897f7b8-6d1f-458a-968e-34573022a183 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.926019441Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,},},}" file="otel-collector/interceptors.go:62" id=e5ed4a2e-1785-463c-b9f0-ae0b1fd54be8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.926136352Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-bw7rc,Uid:5e6ef0c9-2d28-429e-a92f-7bb24314635d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714069993022267202,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-bw7rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:33:12.336880873Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e5ed4a2e-1785-463c-b9f0-ae0b1fd54be8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.926651670Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,Verbose:false,}" file="otel-collector/interceptors.go:62" id=45618e4f-fd52-4373-9602-5b28537255cf name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.926852551Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-bw7rc,Uid:5e6ef0c9-2d28-429e-a92f-7bb24314635d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714069993022267202,Network:&PodSandboxNetworkStatus{Ip:10.244.0.9,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-bw7rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:33:12.336880873Z,kubernetes.io/config.sou
rce: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=45618e4f-fd52-4373-9602-5b28537255cf name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.927493675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,},},}" file="otel-collector/interceptors.go:62" id=b425dda1-ca88-4c5c-be64-5c1c5f03b919 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.927541565Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b425dda1-ca88-4c5c-be64-5c1c5f03b919 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.927604038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c,PodSandboxId:8a59b60c33812e4e8cfcd1a0297b8e50aec4bdac0f47c2adb0ab56144737f7bd,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1714070000274234940,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bw7rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,},Annotations:map[string]string{io.kubernetes.container.hash: 6ea71e24,io.kubern
etes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b425dda1-ca88-4c5c-be64-5c1c5f03b919 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.927962210Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=0d38702b-2926-43e1-b2d1-1c98a0fb40b0 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 25 18:41:44 addons-477322 crio[681]: time="2024-04-25 18:41:44.928136540Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:1388e5efb882a43601a0e5d24afcd463d23544ed351498518d966d6672b5b63c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1714070000363435081,StartedAt:1714070000462434686,FinishedAt:1714070504831058181,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-bw7rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6ef0c9-2d28-429e-a92f-7bb24314635d,},Annotations:map[string]string{io.kubernetes.container.hash: 6ea71e24
,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/var/lib/kubelet/pods/5e6ef0c9-2d28-429e-a92f-7bb24314635d/volumes/kubernetes.io~empty-dir/tmp-dir,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5e6ef0c9-2d28-429e-a92f-7bb24314635d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5e6ef0c9-2d28-429e-a92f-7bb24314635d/containers/metrics-server/8f6bfa0a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_P
RIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5e6ef0c9-2d28-429e-a92f-7bb24314635d/volumes/kubernetes.io~projected/kube-api-access-9llpj,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-bw7rc_5e6ef0c9-2d28-429e-a92f-7bb24314635d/metrics-server/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:948,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=0d38702b-2926-43e1-b2d1-1c98a0fb40b0 name=/runtime.v1.RuntimeService/ContainerStatus
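	[editor's note] The `/runtime.v1.RuntimeService/ListContainers`, `StopContainer` and `StopPodSandbox` entries in the CRI-O debug log above are ordinary CRI gRPC calls issued by the kubelet (and by `minikube logs`/crictl). As a rough sketch of the client side of that traffic, under the assumption that CRI-O is listening on its usual minikube socket path, the same ListContainers call can be issued directly:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket path; adjust if the runtime endpoint differs.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter is what produces the "No filters were applied,
		// returning full container list" debug lines seen above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}

	The `==> container status <==` table that follows is essentially a formatted view of this same response.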
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5ccb67610efb6       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 2 minutes ago       Running             hello-world-app           0                   7d4bc39231a79       hello-world-app-86c47465fc-nstfm
	402c7e9049439       docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88                         5 minutes ago       Running             nginx                     0                   105e8c1342c55       nginx
	7e423af13f382       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                   5 minutes ago       Running             headlamp                  0                   5e588693571d8       headlamp-7559bf459f-4hdvs
	444cf98d597b2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   a90eb47d1f5c3       gcp-auth-5db96cd9b4-fmcbp
	df04bc6e0a864       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         7 minutes ago       Running             yakd                      0                   82d5a69e4c3c2       yakd-dashboard-5ddbf7d777-z4ljv
	1388e5efb882a       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   8 minutes ago       Exited              metrics-server            0                   8a59b60c33812       metrics-server-c59844bb4-bw7rc
	8a581b2bef974       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   4a825f45bb82f       storage-provisioner
	b04a27897034c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   c6282053a094a       coredns-7db6d8ff4d-6wpfr
	e5d13c42367e5       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                        8 minutes ago       Running             kube-proxy                0                   3c41190665578       kube-proxy-rgvqp
	ba098b391087a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   c9baef8b5a1b1       etcd-addons-477322
	dcbbc3655cb9e       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                        8 minutes ago       Running             kube-controller-manager   0                   05fdcdfc675f3       kube-controller-manager-addons-477322
	7c9db646bf6db       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                        8 minutes ago       Running             kube-scheduler            0                   51bd1af867d66       kube-scheduler-addons-477322
	e91ce0b80f86d       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                        8 minutes ago       Running             kube-apiserver            0                   9fc9b6d3e29c5       kube-apiserver-addons-477322
	
	
	==> coredns [b04a27897034cedb321fa5f06387e220bd535ffa851de1660e5098a7206068c5] <==
	[INFO] 10.244.0.7:44463 - 66 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000804485s
	[INFO] 10.244.0.7:52059 - 303 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000174658s
	[INFO] 10.244.0.7:52059 - 27177 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091901s
	[INFO] 10.244.0.7:37509 - 42635 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093057s
	[INFO] 10.244.0.7:37509 - 45449 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096511s
	[INFO] 10.244.0.7:46568 - 34014 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000110061s
	[INFO] 10.244.0.7:46568 - 20703 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000177556s
	[INFO] 10.244.0.7:36374 - 64719 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000125838s
	[INFO] 10.244.0.7:36374 - 11722 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074259s
	[INFO] 10.244.0.7:47843 - 53516 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078397s
	[INFO] 10.244.0.7:47843 - 5170 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030202s
	[INFO] 10.244.0.7:55495 - 33669 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108891s
	[INFO] 10.244.0.7:55495 - 52103 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061179s
	[INFO] 10.244.0.7:54500 - 40811 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125009s
	[INFO] 10.244.0.7:54500 - 31850 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093013s
	[INFO] 10.244.0.22:34137 - 10931 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000554739s
	[INFO] 10.244.0.22:56080 - 9082 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00066313s
	[INFO] 10.244.0.22:40405 - 1108 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160768s
	[INFO] 10.244.0.22:35532 - 56427 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111907s
	[INFO] 10.244.0.22:40357 - 8383 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117509s
	[INFO] 10.244.0.22:60494 - 7978 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121966s
	[INFO] 10.244.0.22:36247 - 26665 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001445239s
	[INFO] 10.244.0.22:58557 - 39543 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001635333s
	[INFO] 10.244.0.25:50801 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0002597s
	[INFO] 10.244.0.25:43440 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000211661s
	
	
	==> describe nodes <==
	Name:               addons-477322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-477322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=addons-477322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T18_32_52_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-477322
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:32:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-477322
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 18:41:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 18:38:59 +0000   Thu, 25 Apr 2024 18:32:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 18:38:59 +0000   Thu, 25 Apr 2024 18:32:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 18:38:59 +0000   Thu, 25 Apr 2024 18:32:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 18:38:59 +0000   Thu, 25 Apr 2024 18:32:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    addons-477322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 beb887a3c48d42baab55e27f20912f96
	  System UUID:                beb887a3-c48d-42ba-ab55-e27f20912f96
	  Boot ID:                    9e9616d2-9083-4750-bf85-df17f463b7e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-nstfm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  gcp-auth                    gcp-auth-5db96cd9b4-fmcbp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  headlamp                    headlamp-7559bf459f-4hdvs                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 coredns-7db6d8ff4d-6wpfr                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m40s
	  kube-system                 etcd-addons-477322                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m55s
	  kube-system                 kube-apiserver-addons-477322             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 kube-controller-manager-addons-477322    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 kube-proxy-rgvqp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-scheduler-addons-477322             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m34s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-z4ljv          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m37s  kube-proxy       
	  Normal  Starting                 8m54s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m54s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m54s  kubelet          Node addons-477322 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m54s  kubelet          Node addons-477322 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m54s  kubelet          Node addons-477322 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m53s  kubelet          Node addons-477322 status is now: NodeReady
	  Normal  RegisteredNode           8m41s  node-controller  Node addons-477322 event: Registered Node addons-477322 in Controller
	
	
	==> dmesg <==
	[  +0.155337] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.050926] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.157085] kauditd_printk_skb: 126 callbacks suppressed
	[  +6.969190] kauditd_printk_skb: 109 callbacks suppressed
	[ +13.238306] kauditd_printk_skb: 23 callbacks suppressed
	[ +22.635649] kauditd_printk_skb: 2 callbacks suppressed
	[Apr25 18:34] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.113415] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.556885] kauditd_printk_skb: 59 callbacks suppressed
	[  +6.234245] kauditd_printk_skb: 21 callbacks suppressed
	[Apr25 18:35] kauditd_printk_skb: 24 callbacks suppressed
	[ +15.417081] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.582374] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.349567] kauditd_printk_skb: 11 callbacks suppressed
	[ +12.406275] kauditd_printk_skb: 32 callbacks suppressed
	[Apr25 18:36] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.001322] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.036832] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.243491] kauditd_printk_skb: 27 callbacks suppressed
	[  +9.014676] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.527007] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.889041] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.746476] kauditd_printk_skb: 33 callbacks suppressed
	[Apr25 18:38] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.159948] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [ba098b391087ab69c154d60e93cdbca9709dae3e860e358078373ea832309cad] <==
	{"level":"warn","ts":"2024-04-25T18:34:11.105712Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.832937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11167"}
	{"level":"info","ts":"2024-04-25T18:34:11.106266Z","caller":"traceutil/trace.go:171","msg":"trace[1045415287] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:979; }","duration":"255.412898ms","start":"2024-04-25T18:34:10.850832Z","end":"2024-04-25T18:34:11.106245Z","steps":["trace[1045415287] 'agreement among raft nodes before linearized reading'  (duration: 254.695501ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:34:28.387147Z","caller":"traceutil/trace.go:171","msg":"trace[1455625901] transaction","detail":"{read_only:false; response_revision:1088; number_of_response:1; }","duration":"240.643015ms","start":"2024-04-25T18:34:28.14647Z","end":"2024-04-25T18:34:28.387113Z","steps":["trace[1455625901] 'process raft request'  (duration: 238.426866ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:34:54.908156Z","caller":"traceutil/trace.go:171","msg":"trace[730924886] linearizableReadLoop","detail":"{readStateIndex:1220; appliedIndex:1219; }","duration":"119.448648ms","start":"2024-04-25T18:34:54.788692Z","end":"2024-04-25T18:34:54.90814Z","steps":["trace[730924886] 'read index received'  (duration: 119.314554ms)","trace[730924886] 'applied index is now lower than readState.Index'  (duration: 133.684µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-25T18:34:54.908577Z","caller":"traceutil/trace.go:171","msg":"trace[41928026] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"297.813998ms","start":"2024-04-25T18:34:54.610751Z","end":"2024-04-25T18:34:54.908565Z","steps":["trace[41928026] 'process raft request'  (duration: 297.300753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:34:54.908891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.174024ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14358"}
	{"level":"info","ts":"2024-04-25T18:34:54.909619Z","caller":"traceutil/trace.go:171","msg":"trace[338300484] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1181; }","duration":"120.953396ms","start":"2024-04-25T18:34:54.788656Z","end":"2024-04-25T18:34:54.909609Z","steps":["trace[338300484] 'agreement among raft nodes before linearized reading'  (duration: 120.121509ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:35:44.017183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.975264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-25T18:35:44.017231Z","caller":"traceutil/trace.go:171","msg":"trace[131467759] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1271; }","duration":"202.044098ms","start":"2024-04-25T18:35:43.815175Z","end":"2024-04-25T18:35:44.017219Z","steps":["trace[131467759] 'range keys from in-memory index tree'  (duration: 201.904708ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:35:44.017245Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.892089ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-04-25T18:35:44.017283Z","caller":"traceutil/trace.go:171","msg":"trace[390698173] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1271; }","duration":"172.066584ms","start":"2024-04-25T18:35:43.845208Z","end":"2024-04-25T18:35:44.017274Z","steps":["trace[390698173] 'range keys from in-memory index tree'  (duration: 171.780329ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:36:06.070473Z","caller":"traceutil/trace.go:171","msg":"trace[382770286] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1467; }","duration":"359.401107ms","start":"2024-04-25T18:36:05.711054Z","end":"2024-04-25T18:36:06.070455Z","steps":["trace[382770286] 'process raft request'  (duration: 359.123114ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:36:06.070718Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T18:36:05.711042Z","time spent":"359.494666ms","remote":"127.0.0.1:33370","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":46,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/controllers/kube-system/registry\" mod_revision:942 > success:<request_delete_range:<key:\"/registry/controllers/kube-system/registry\" > > failure:<request_range:<key:\"/registry/controllers/kube-system/registry\" > >"}
	{"level":"info","ts":"2024-04-25T18:36:06.071137Z","caller":"traceutil/trace.go:171","msg":"trace[835363111] linearizableReadLoop","detail":"{readStateIndex:1525; appliedIndex:1525; }","duration":"299.549111ms","start":"2024-04-25T18:36:05.771579Z","end":"2024-04-25T18:36:06.071128Z","steps":["trace[835363111] 'read index received'  (duration: 299.54581ms)","trace[835363111] 'applied index is now lower than readState.Index'  (duration: 2.799µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-25T18:36:06.071256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"299.669087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6768"}
	{"level":"info","ts":"2024-04-25T18:36:06.071275Z","caller":"traceutil/trace.go:171","msg":"trace[2128029028] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1467; }","duration":"299.69434ms","start":"2024-04-25T18:36:05.771575Z","end":"2024-04-25T18:36:06.071269Z","steps":["trace[2128029028] 'agreement among raft nodes before linearized reading'  (duration: 299.601287ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:36:06.083433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.764629ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6768"}
	{"level":"info","ts":"2024-04-25T18:36:06.083491Z","caller":"traceutil/trace.go:171","msg":"trace[1694448508] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1467; }","duration":"274.841639ms","start":"2024-04-25T18:36:05.80864Z","end":"2024-04-25T18:36:06.083481Z","steps":["trace[1694448508] 'agreement among raft nodes before linearized reading'  (duration: 274.733399ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:36:06.083612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.37004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-04-25T18:36:06.08363Z","caller":"traceutil/trace.go:171","msg":"trace[1572570004] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1467; }","duration":"286.405391ms","start":"2024-04-25T18:36:05.797217Z","end":"2024-04-25T18:36:06.083622Z","steps":["trace[1572570004] 'agreement among raft nodes before linearized reading'  (duration: 286.360002ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T18:36:06.083091Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.103288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-25T18:36:06.084062Z","caller":"traceutil/trace.go:171","msg":"trace[650421928] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1467; }","duration":"269.099163ms","start":"2024-04-25T18:36:05.814955Z","end":"2024-04-25T18:36:06.084055Z","steps":["trace[650421928] 'agreement among raft nodes before linearized reading'  (duration: 268.108802ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:36:18.858389Z","caller":"traceutil/trace.go:171","msg":"trace[471666342] transaction","detail":"{read_only:false; response_revision:1536; number_of_response:1; }","duration":"124.188714ms","start":"2024-04-25T18:36:18.734083Z","end":"2024-04-25T18:36:18.858272Z","steps":["trace[471666342] 'process raft request'  (duration: 123.755033ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:36:19.581717Z","caller":"traceutil/trace.go:171","msg":"trace[2130447041] transaction","detail":"{read_only:false; response_revision:1537; number_of_response:1; }","duration":"177.763667ms","start":"2024-04-25T18:36:19.403936Z","end":"2024-04-25T18:36:19.5817Z","steps":["trace[2130447041] 'process raft request'  (duration: 177.193836ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T18:36:25.532516Z","caller":"traceutil/trace.go:171","msg":"trace[1875476033] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1586; }","duration":"224.662333ms","start":"2024-04-25T18:36:25.307838Z","end":"2024-04-25T18:36:25.532501Z","steps":["trace[1875476033] 'process raft request'  (duration: 224.387177ms)"],"step_count":1}
	
	
	==> gcp-auth [444cf98d597b26ac307437fc04a6576f39c4ddc200c2eeb2e0444204f26594e7] <==
	2024/04/25 18:35:51 Ready to write response ...
	2024/04/25 18:35:51 Ready to marshal response ...
	2024/04/25 18:35:51 Ready to write response ...
	2024/04/25 18:35:51 Ready to marshal response ...
	2024/04/25 18:35:51 Ready to write response ...
	2024/04/25 18:35:56 Ready to marshal response ...
	2024/04/25 18:35:56 Ready to write response ...
	2024/04/25 18:35:58 Ready to marshal response ...
	2024/04/25 18:35:58 Ready to write response ...
	2024/04/25 18:36:00 Ready to marshal response ...
	2024/04/25 18:36:00 Ready to write response ...
	2024/04/25 18:36:00 Ready to marshal response ...
	2024/04/25 18:36:00 Ready to write response ...
	2024/04/25 18:36:00 Ready to marshal response ...
	2024/04/25 18:36:00 Ready to write response ...
	2024/04/25 18:36:04 Ready to marshal response ...
	2024/04/25 18:36:04 Ready to write response ...
	2024/04/25 18:36:14 Ready to marshal response ...
	2024/04/25 18:36:14 Ready to write response ...
	2024/04/25 18:36:26 Ready to marshal response ...
	2024/04/25 18:36:26 Ready to write response ...
	2024/04/25 18:36:31 Ready to marshal response ...
	2024/04/25 18:36:31 Ready to write response ...
	2024/04/25 18:38:47 Ready to marshal response ...
	2024/04/25 18:38:47 Ready to write response ...
	
	
	==> kernel <==
	 18:41:45 up 9 min,  0 users,  load average: 0.26, 0.88, 0.68
	Linux addons-477322 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e91ce0b80f86d5e85292f94da6f1cd5d7db205853dfcfe415aa0059ccb450f83] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0425 18:34:23.011567       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.31.115:443: connect: connection refused
	E0425 18:34:23.012108       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.31.115:443: connect: connection refused
	E0425 18:34:23.024482       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.31.115:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.31.115:443: connect: connection refused
	I0425 18:34:23.177114       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0425 18:35:56.164989       1 conn.go:339] Error on socket receive: read tcp 192.168.39.239:8443->192.168.39.1:34768: use of closed network connection
	I0425 18:36:00.434030       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.161.99"}
	I0425 18:36:20.353904       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0425 18:36:21.407691       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0425 18:36:26.150834       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0425 18:36:26.360862       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.5.208"}
	I0425 18:36:26.837956       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0425 18:36:30.389622       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0425 18:36:41.286072       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0425 18:36:48.080753       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0425 18:36:48.080849       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0425 18:36:48.139611       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0425 18:36:48.140163       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0425 18:36:48.203763       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0425 18:36:48.203970       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0425 18:36:48.253927       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	W0425 18:36:49.204466       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0425 18:36:49.256412       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0425 18:36:49.256529       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0425 18:38:47.828545       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.4.205"}
	
	
	==> kube-controller-manager [dcbbc3655cb9eee9c48e5c703032e6c66e0f3c1d8fe46c50b43c2e8e617986f7] <==
	W0425 18:39:47.533652       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:39:47.533716       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:39:54.135921       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:39:54.135984       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:40:07.572848       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:40:07.573025       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:40:21.858554       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:40:21.858849       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:40:22.113085       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:40:22.113175       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:40:40.752597       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:40:40.752721       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:40:53.937841       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:40:53.937908       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:40:55.978969       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:40:55.979284       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:41:02.132260       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:41:02.132470       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:41:32.262991       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:41:32.263203       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:41:36.402683       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:41:36.402825       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0425 18:41:41.138670       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0425 18:41:41.138772       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0425 18:41:43.702364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="13.364µs"
	
	
	==> kube-proxy [e5d13c42367e56a88594713117ba450b13bde86d14fdd1911ed31bcae79c6255] <==
	I0425 18:33:07.631973       1 server_linux.go:69] "Using iptables proxy"
	I0425 18:33:07.658383       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.239"]
	I0425 18:33:07.741876       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 18:33:07.741984       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 18:33:07.742002       1 server_linux.go:165] "Using iptables Proxier"
	I0425 18:33:07.758580       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 18:33:07.758786       1 server.go:872] "Version info" version="v1.30.0"
	I0425 18:33:07.758798       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 18:33:07.768541       1 config.go:192] "Starting service config controller"
	I0425 18:33:07.768582       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 18:33:07.768606       1 config.go:101] "Starting endpoint slice config controller"
	I0425 18:33:07.768610       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 18:33:07.769016       1 config.go:319] "Starting node config controller"
	I0425 18:33:07.769059       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 18:33:07.868799       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 18:33:07.868880       1 shared_informer.go:320] Caches are synced for service config
	I0425 18:33:07.869154       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7c9db646bf6dbf0e9d7d21d563363f55428cc69781ff0b871042fc82cd43a56d] <==
	W0425 18:32:49.124061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 18:32:49.124100       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 18:32:49.124161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 18:32:49.124200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0425 18:32:49.124270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 18:32:49.124383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 18:32:49.126463       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0425 18:32:49.126690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0425 18:32:49.940179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0425 18:32:49.940389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0425 18:32:50.003442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 18:32:50.003498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 18:32:50.085167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 18:32:50.085257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0425 18:32:50.260484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 18:32:50.260626       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 18:32:50.312940       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 18:32:50.313017       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 18:32:50.317034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 18:32:50.317086       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 18:32:50.380256       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 18:32:50.380377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0425 18:32:50.655234       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 18:32:50.655415       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0425 18:32:52.303610       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.197370    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95"} err="failed to get container status \"9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95\": rpc error: code = NotFound desc = could not find container \"9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95\": container with ID starting with 9cfef00b751a82b6e43c9de4971d5a26f78948c3789773d9041e3924a8449c95 not found: ID does not exist"
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.312718    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc9858ea-8b29-48d6-9a91-d584980367d0-webhook-cert\") pod \"fc9858ea-8b29-48d6-9a91-d584980367d0\" (UID: \"fc9858ea-8b29-48d6-9a91-d584980367d0\") "
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.312767    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w46jc\" (UniqueName: \"kubernetes.io/projected/fc9858ea-8b29-48d6-9a91-d584980367d0-kube-api-access-w46jc\") pod \"fc9858ea-8b29-48d6-9a91-d584980367d0\" (UID: \"fc9858ea-8b29-48d6-9a91-d584980367d0\") "
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.317614    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9858ea-8b29-48d6-9a91-d584980367d0-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "fc9858ea-8b29-48d6-9a91-d584980367d0" (UID: "fc9858ea-8b29-48d6-9a91-d584980367d0"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.319484    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9858ea-8b29-48d6-9a91-d584980367d0-kube-api-access-w46jc" (OuterVolumeSpecName: "kube-api-access-w46jc") pod "fc9858ea-8b29-48d6-9a91-d584980367d0" (UID: "fc9858ea-8b29-48d6-9a91-d584980367d0"). InnerVolumeSpecName "kube-api-access-w46jc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.413591    1283 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w46jc\" (UniqueName: \"kubernetes.io/projected/fc9858ea-8b29-48d6-9a91-d584980367d0-kube-api-access-w46jc\") on node \"addons-477322\" DevicePath \"\""
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.413624    1283 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc9858ea-8b29-48d6-9a91-d584980367d0-webhook-cert\") on node \"addons-477322\" DevicePath \"\""
	Apr 25 18:38:53 addons-477322 kubelet[1283]: I0425 18:38:53.567056    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9858ea-8b29-48d6-9a91-d584980367d0" path="/var/lib/kubelet/pods/fc9858ea-8b29-48d6-9a91-d584980367d0/volumes"
	Apr 25 18:39:51 addons-477322 kubelet[1283]: E0425 18:39:51.582710    1283 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:39:51 addons-477322 kubelet[1283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:39:51 addons-477322 kubelet[1283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:39:51 addons-477322 kubelet[1283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:39:51 addons-477322 kubelet[1283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 18:40:51 addons-477322 kubelet[1283]: E0425 18:40:51.581974    1283 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:40:51 addons-477322 kubelet[1283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:40:51 addons-477322 kubelet[1283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:40:51 addons-477322 kubelet[1283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:40:51 addons-477322 kubelet[1283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 18:41:43 addons-477322 kubelet[1283]: I0425 18:41:43.726573    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-nstfm" podStartSLOduration=173.577388289 podStartE2EDuration="2m56.726545065s" podCreationTimestamp="2024-04-25 18:38:47 +0000 UTC" firstStartedPulling="2024-04-25 18:38:48.30400401 +0000 UTC m=+356.916834217" lastFinishedPulling="2024-04-25 18:38:51.453160784 +0000 UTC m=+360.065990993" observedRunningTime="2024-04-25 18:38:52.183842173 +0000 UTC m=+360.796672402" watchObservedRunningTime="2024-04-25 18:41:43.726545065 +0000 UTC m=+532.339375272"
	Apr 25 18:41:45 addons-477322 kubelet[1283]: I0425 18:41:45.134828    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9llpj\" (UniqueName: \"kubernetes.io/projected/5e6ef0c9-2d28-429e-a92f-7bb24314635d-kube-api-access-9llpj\") pod \"5e6ef0c9-2d28-429e-a92f-7bb24314635d\" (UID: \"5e6ef0c9-2d28-429e-a92f-7bb24314635d\") "
	Apr 25 18:41:45 addons-477322 kubelet[1283]: I0425 18:41:45.134870    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e6ef0c9-2d28-429e-a92f-7bb24314635d-tmp-dir\") pod \"5e6ef0c9-2d28-429e-a92f-7bb24314635d\" (UID: \"5e6ef0c9-2d28-429e-a92f-7bb24314635d\") "
	Apr 25 18:41:45 addons-477322 kubelet[1283]: I0425 18:41:45.135214    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6ef0c9-2d28-429e-a92f-7bb24314635d-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "5e6ef0c9-2d28-429e-a92f-7bb24314635d" (UID: "5e6ef0c9-2d28-429e-a92f-7bb24314635d"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Apr 25 18:41:45 addons-477322 kubelet[1283]: I0425 18:41:45.150972    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6ef0c9-2d28-429e-a92f-7bb24314635d-kube-api-access-9llpj" (OuterVolumeSpecName: "kube-api-access-9llpj") pod "5e6ef0c9-2d28-429e-a92f-7bb24314635d" (UID: "5e6ef0c9-2d28-429e-a92f-7bb24314635d"). InnerVolumeSpecName "kube-api-access-9llpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 25 18:41:45 addons-477322 kubelet[1283]: I0425 18:41:45.236935    1283 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9llpj\" (UniqueName: \"kubernetes.io/projected/5e6ef0c9-2d28-429e-a92f-7bb24314635d-kube-api-access-9llpj\") on node \"addons-477322\" DevicePath \"\""
	Apr 25 18:41:45 addons-477322 kubelet[1283]: I0425 18:41:45.236972    1283 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e6ef0c9-2d28-429e-a92f-7bb24314635d-tmp-dir\") on node \"addons-477322\" DevicePath \"\""
	
	
	==> storage-provisioner [8a581b2bef974518ff15839d7127b97175c6ca2c11630a8877145f8e707dacfa] <==
	I0425 18:33:16.266399       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0425 18:33:16.293200       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0425 18:33:16.297471       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0425 18:33:16.365257       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0425 18:33:16.376924       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f4bdbf17-ef67-4f87-b6e9-7a526d889302", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-477322_55436776-d203-4ba2-8edd-415dd7c1f311 became leader
	I0425 18:33:16.379556       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-477322_55436776-d203-4ba2-8edd-415dd7c1f311!
	I0425 18:33:16.484580       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-477322_55436776-d203-4ba2-8edd-415dd7c1f311!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-477322 -n addons-477322
helpers_test.go:261: (dbg) Run:  kubectl --context addons-477322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (339.78s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.32s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-477322
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-477322: exit status 82 (2m0.475357217s)

                                                
                                                
-- stdout --
	* Stopping node "addons-477322"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-477322" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-477322
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-477322: exit status 11 (21.558111902s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-477322" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-477322
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-477322: exit status 11 (6.14401402s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-477322" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-477322
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-477322: exit status 11 (6.142947672s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-477322" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.32s)
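
Both addon failures above are downstream of the stop timeout: libvirt still reports the VM as "Running", yet its SSH endpoint (192.168.39.239:22) no longer answers, so every later command that needs a guest session fails with "no route to host". A minimal manual triage sketch for this state on the KVM driver, assuming the libvirt domain carries the profile name (addons-477322) and virsh is available on the host; this is not part of the test flow, only a diagnostic aid:

    # Confirm what libvirt thinks the domain is doing ("running" here matches GUEST_STOP_TIMEOUT).
    virsh domstate addons-477322

    # Check whether the guest's SSH port is reachable at all.
    nc -vz -w 5 192.168.39.239 22

    # Force the domain off and retry the graceful stop so later steps see a cleanly stopped profile.
    virsh destroy addons-477322
    out/minikube-linux-amd64 stop -p addons-477322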

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (3.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-117423 ssh "findmnt -T" /mount1: exit status 1 (326.924965ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-117423 --kill=true
functional_test_mount_test.go:362: 1s TIMEOUT: Process 23739 is still running
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: read stdout failed: read |0: file already closed
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount1 --alsologtostderr -v=1] stdout:
functional_test_mount_test.go:313: read stderr failed: read |0: file already closed
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount1 --alsologtostderr -v=1] stderr:
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: read stdout failed: read |0: file already closed
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount2 --alsologtostderr -v=1] stdout:
functional_test_mount_test.go:313: read stderr failed: read |0: file already closed
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount2 --alsologtostderr -v=1] stderr:
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount3 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount3 --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001 into VM as /mount3 ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.39.1:39177
* Userspace file server: ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001 to /mount3

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
functional_test_mount_test.go:313: (dbg) [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001:/mount3 --alsologtostderr -v=1] stderr:
I0425 18:49:18.731640   23740 out.go:291] Setting OutFile to fd 1 ...
I0425 18:49:18.731847   23740 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:18.731867   23740 out.go:304] Setting ErrFile to fd 2...
I0425 18:49:18.731876   23740 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:18.732047   23740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
I0425 18:49:18.732268   23740 mustload.go:65] Loading cluster: functional-117423
I0425 18:49:18.732672   23740 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0425 18:49:18.733051   23740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:18.733100   23740 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:18.754545   23740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
I0425 18:49:18.755129   23740 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:18.755819   23740 main.go:141] libmachine: Using API Version  1
I0425 18:49:18.755846   23740 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:18.756227   23740 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:18.756400   23740 main.go:141] libmachine: (functional-117423) Calling .GetState
I0425 18:49:18.758190   23740 host.go:66] Checking if "functional-117423" exists ...
I0425 18:49:18.758629   23740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:18.758654   23740 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:18.775078   23740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
I0425 18:49:18.775428   23740 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:18.775792   23740 main.go:141] libmachine: Using API Version  1
I0425 18:49:18.775804   23740 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:18.776037   23740 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:18.776153   23740 main.go:141] libmachine: (functional-117423) Calling .DriverName
I0425 18:49:18.776225   23740 main.go:141] libmachine: (functional-117423) Calling .DriverName
I0425 18:49:18.776298   23740 main.go:141] libmachine: (functional-117423) Calling .GetIP
I0425 18:49:18.779099   23740 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:18.779511   23740 main.go:141] libmachine: (functional-117423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:37:c2", ip: ""} in network mk-functional-117423: {Iface:virbr1 ExpiryTime:2024-04-25 19:45:54 +0000 UTC Type:0 Mac:52:54:00:90:37:c2 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-117423 Clientid:01:52:54:00:90:37:c2}
I0425 18:49:18.779535   23740 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined IP address 192.168.39.139 and MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:18.780061   23740 main.go:141] libmachine: (functional-117423) Calling .DriverName
I0425 18:49:18.782765   23740 out.go:177] * Mounting host path /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001 into VM as /mount3 ...
I0425 18:49:18.784429   23740 out.go:177]   - Mount type:   9p
I0425 18:49:18.785713   23740 out.go:177]   - User ID:      docker
I0425 18:49:18.787190   23740 out.go:177]   - Group ID:     docker
I0425 18:49:18.789118   23740 out.go:177]   - Version:      9p2000.L
I0425 18:49:18.790741   23740 out.go:177]   - Message Size: 262144
I0425 18:49:18.792462   23740 out.go:177]   - Options:      map[]
I0425 18:49:18.794484   23740 out.go:177]   - Bind Address: 192.168.39.1:39177
I0425 18:49:18.796396   23740 out.go:177] * Userspace file server: 
I0425 18:49:18.796661   23740 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount3 | grep /mount3)" != "x" ] && sudo umount -f /mount3 || echo "
I0425 18:49:18.798219   23740 main.go:141] libmachine: (functional-117423) Calling .GetSSHHostname
I0425 18:49:18.801049   23740 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:18.801414   23740 main.go:141] libmachine: (functional-117423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:37:c2", ip: ""} in network mk-functional-117423: {Iface:virbr1 ExpiryTime:2024-04-25 19:45:54 +0000 UTC Type:0 Mac:52:54:00:90:37:c2 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-117423 Clientid:01:52:54:00:90:37:c2}
I0425 18:49:18.801439   23740 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined IP address 192.168.39.139 and MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:18.801580   23740 main.go:141] libmachine: (functional-117423) Calling .GetSSHPort
I0425 18:49:18.801729   23740 main.go:141] libmachine: (functional-117423) Calling .GetSSHKeyPath
I0425 18:49:18.801868   23740 main.go:141] libmachine: (functional-117423) Calling .GetSSHUsername
I0425 18:49:18.801978   23740 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/functional-117423/id_rsa Username:docker}
I0425 18:49:18.979529   23740 mount.go:180] unmount for /mount3 ran successfully
I0425 18:49:18.979555   23740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount3"
I0425 18:49:19.023459   23740 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=39177,trans=tcp,version=9p2000.L 192.168.39.1 /mount3"
I0425 18:49:19.063360   23740 main.go:125] stdlog: ufs.go:141 connected
I0425 18:49:19.067791   23740 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.139:48882 Tversion tag 65535 msize 65536 version '9P2000.L'
I0425 18:49:19.067863   23740 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.139:48882 Rversion tag 65535 msize 65536 version '9P2000'
I0425 18:49:19.069204   23740 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.139:48882 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0425 18:49:19.069282   23740 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.139:48882 Rattach tag 0 aqid (20fa08f 1697322c 'd')
I0425 18:49:19.069578   23740 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.139:48882 Tstat tag 0 fid 0
I0425 18:49:19.069696   23740 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.139:48882 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08f 1697322c 'd') m d775 at 0 mt 1714070958 l 4096 t 0 d 0 ext )
I0425 18:49:19.105328   23740 main.go:125] stdlog: ufs.go:141 connected
I0425 18:49:19.105444   23740 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.139:48896 Tversion tag 65535 msize 65536 version '9P2000.L'
I0425 18:49:19.105481   23740 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.139:48896 Rversion tag 65535 msize 65536 version '9P2000'
I0425 18:49:19.105939   23740 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.139:48896 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0425 18:49:19.105998   23740 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.139:48896 Rattach tag 0 aqid (20fa08f 1697322c 'd')
I0425 18:49:19.107034   23740 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.139:48896 Tstat tag 0 fid 0
I0425 18:49:19.107193   23740 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.139:48896 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08f 1697322c 'd') m d775 at 0 mt 1714070958 l 4096 t 0 d 0 ext )
I0425 18:49:19.107721   23740 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/.mount-process: {Name:mk7e058941830309349d6dfe259d2ab2bca13cbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0425 18:49:19.107820   23740 mount.go:105] mount successful: ""
I0425 18:49:19.109459   23740 out.go:177] * Successfully mounted /tmp/TestFunctionalparallelMountCmdVerifyCleanup2250523531/001 to /mount3
I0425 18:49:19.110708   23740 out.go:177] 
I0425 18:49:19.111983   23740 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
--- FAIL: TestFunctional/parallel/MountCmd/VerifyCleanup (3.01s)
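
For reference, the /mount3 stderr above records the exact guest-side sequence minikube runs for each mount: unmount any stale target, recreate the mount point, then issue the 9p mount against the host ufs server. A condensed sketch of the same steps run by hand inside the guest (values copied from this log: bind address 192.168.39.1, port 39177, msize 262144; they are specific to this run):

    # Run inside the guest, e.g. via: out/minikube-linux-amd64 -p functional-117423 ssh
    # 1. Drop any stale /mount3 left over from an earlier attempt.
    [ -n "$(findmnt -T /mount3 | grep /mount3)" ] && sudo umount -f /mount3

    # 2. Recreate the mount point.
    sudo mkdir -p /mount3

    # 3. Mount the host's ufs 9p server, mapping ownership to the docker uid/gid.
    sudo mount -t 9p \
      -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=39177,trans=tcp,version=9p2000.L \
      192.168.39.1 /mount3

    # 4. Verify, as the test does.
    findmnt -T /mount3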

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 node stop m02 -v=7 --alsologtostderr
E0425 18:55:45.439148   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:56:20.172129   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912667 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.505361518s)

                                                
                                                
-- stdout --
	* Stopping node "ha-912667-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 18:55:16.962001   28522 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:55:16.962286   28522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:55:16.962296   28522 out.go:304] Setting ErrFile to fd 2...
	I0425 18:55:16.962300   28522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:55:16.962486   28522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:55:16.962718   28522 mustload.go:65] Loading cluster: ha-912667
	I0425 18:55:16.963138   28522 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:55:16.963152   28522 stop.go:39] StopHost: ha-912667-m02
	I0425 18:55:16.963502   28522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:55:16.963540   28522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:55:16.979005   28522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0425 18:55:16.979549   28522 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:55:16.980159   28522 main.go:141] libmachine: Using API Version  1
	I0425 18:55:16.980183   28522 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:55:16.980522   28522 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:55:16.983387   28522 out.go:177] * Stopping node "ha-912667-m02"  ...
	I0425 18:55:16.985085   28522 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0425 18:55:16.985128   28522 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:55:16.985366   28522 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0425 18:55:16.985401   28522 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:55:16.988476   28522 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:55:16.988924   28522 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:55:16.988963   28522 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:55:16.989129   28522 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:55:16.989332   28522 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:55:16.989496   28522 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:55:16.989680   28522 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	I0425 18:55:17.080641   28522 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0425 18:55:17.146239   28522 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0425 18:55:17.204659   28522 main.go:141] libmachine: Stopping "ha-912667-m02"...
	I0425 18:55:17.204681   28522 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 18:55:17.206497   28522 main.go:141] libmachine: (ha-912667-m02) Calling .Stop
	I0425 18:55:17.210279   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 0/120
	I0425 18:55:18.211952   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 1/120
	I0425 18:55:19.213546   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 2/120
	I0425 18:55:20.215087   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 3/120
	I0425 18:55:21.217059   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 4/120
	I0425 18:55:22.219399   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 5/120
	I0425 18:55:23.221006   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 6/120
	I0425 18:55:24.222335   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 7/120
	I0425 18:55:25.223905   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 8/120
	I0425 18:55:26.225482   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 9/120
	I0425 18:55:27.227689   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 10/120
	I0425 18:55:28.229204   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 11/120
	I0425 18:55:29.230484   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 12/120
	I0425 18:55:30.232843   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 13/120
	I0425 18:55:31.234174   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 14/120
	I0425 18:55:32.236361   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 15/120
	I0425 18:55:33.237807   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 16/120
	I0425 18:55:34.240123   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 17/120
	I0425 18:55:35.241711   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 18/120
	I0425 18:55:36.242864   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 19/120
	I0425 18:55:37.244695   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 20/120
	I0425 18:55:38.245986   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 21/120
	I0425 18:55:39.247378   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 22/120
	I0425 18:55:40.248730   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 23/120
	I0425 18:55:41.250118   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 24/120
	I0425 18:55:42.252140   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 25/120
	I0425 18:55:43.253466   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 26/120
	I0425 18:55:44.255856   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 27/120
	I0425 18:55:45.257516   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 28/120
	I0425 18:55:46.259224   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 29/120
	I0425 18:55:47.260494   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 30/120
	I0425 18:55:48.262245   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 31/120
	I0425 18:55:49.263807   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 32/120
	I0425 18:55:50.265291   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 33/120
	I0425 18:55:51.267119   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 34/120
	I0425 18:55:52.269007   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 35/120
	I0425 18:55:53.270536   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 36/120
	I0425 18:55:54.272330   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 37/120
	I0425 18:55:55.273673   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 38/120
	I0425 18:55:56.275968   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 39/120
	I0425 18:55:57.278140   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 40/120
	I0425 18:55:58.280008   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 41/120
	I0425 18:55:59.281671   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 42/120
	I0425 18:56:00.282950   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 43/120
	I0425 18:56:01.284643   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 44/120
	I0425 18:56:02.286650   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 45/120
	I0425 18:56:03.288852   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 46/120
	I0425 18:56:04.290321   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 47/120
	I0425 18:56:05.291624   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 48/120
	I0425 18:56:06.293243   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 49/120
	I0425 18:56:07.295341   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 50/120
	I0425 18:56:08.296667   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 51/120
	I0425 18:56:09.298524   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 52/120
	I0425 18:56:10.300679   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 53/120
	I0425 18:56:11.302108   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 54/120
	I0425 18:56:12.303914   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 55/120
	I0425 18:56:13.305211   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 56/120
	I0425 18:56:14.306486   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 57/120
	I0425 18:56:15.308705   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 58/120
	I0425 18:56:16.309869   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 59/120
	I0425 18:56:17.311644   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 60/120
	I0425 18:56:18.313097   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 61/120
	I0425 18:56:19.314588   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 62/120
	I0425 18:56:20.316908   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 63/120
	I0425 18:56:21.318457   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 64/120
	I0425 18:56:22.320370   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 65/120
	I0425 18:56:23.322003   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 66/120
	I0425 18:56:24.323745   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 67/120
	I0425 18:56:25.325425   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 68/120
	I0425 18:56:26.327080   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 69/120
	I0425 18:56:27.328554   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 70/120
	I0425 18:56:28.330460   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 71/120
	I0425 18:56:29.332611   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 72/120
	I0425 18:56:30.334052   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 73/120
	I0425 18:56:31.335501   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 74/120
	I0425 18:56:32.337361   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 75/120
	I0425 18:56:33.338664   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 76/120
	I0425 18:56:34.340700   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 77/120
	I0425 18:56:35.341975   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 78/120
	I0425 18:56:36.343425   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 79/120
	I0425 18:56:37.344803   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 80/120
	I0425 18:56:38.346083   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 81/120
	I0425 18:56:39.347602   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 82/120
	I0425 18:56:40.348824   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 83/120
	I0425 18:56:41.350432   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 84/120
	I0425 18:56:42.352322   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 85/120
	I0425 18:56:43.353622   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 86/120
	I0425 18:56:44.355087   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 87/120
	I0425 18:56:45.357339   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 88/120
	I0425 18:56:46.358791   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 89/120
	I0425 18:56:47.360372   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 90/120
	I0425 18:56:48.361713   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 91/120
	I0425 18:56:49.363712   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 92/120
	I0425 18:56:50.365185   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 93/120
	I0425 18:56:51.366607   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 94/120
	I0425 18:56:52.368612   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 95/120
	I0425 18:56:53.369935   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 96/120
	I0425 18:56:54.371782   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 97/120
	I0425 18:56:55.373175   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 98/120
	I0425 18:56:56.375200   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 99/120
	I0425 18:56:57.377358   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 100/120
	I0425 18:56:58.378670   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 101/120
	I0425 18:56:59.380854   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 102/120
	I0425 18:57:00.382414   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 103/120
	I0425 18:57:01.384647   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 104/120
	I0425 18:57:02.386687   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 105/120
	I0425 18:57:03.388800   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 106/120
	I0425 18:57:04.390331   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 107/120
	I0425 18:57:05.392648   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 108/120
	I0425 18:57:06.394091   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 109/120
	I0425 18:57:07.395967   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 110/120
	I0425 18:57:08.397498   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 111/120
	I0425 18:57:09.398946   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 112/120
	I0425 18:57:10.400499   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 113/120
	I0425 18:57:11.401820   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 114/120
	I0425 18:57:12.403256   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 115/120
	I0425 18:57:13.405442   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 116/120
	I0425 18:57:14.407470   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 117/120
	I0425 18:57:15.408734   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 118/120
	I0425 18:57:16.410359   28522 main.go:141] libmachine: (ha-912667-m02) Waiting for machine to stop 119/120
	I0425 18:57:17.411880   28522 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0425 18:57:17.411999   28522 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-912667 node stop m02 -v=7 --alsologtostderr": exit status 30
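The 2m0.5s runtime of the stop command follows directly from the loop above: the driver polls the domain state once per second for 120 attempts ("Waiting for machine to stop N/120") and only then gives up with exit status 30. An equivalent manual poll, sketched with virsh (the domain name ha-912667-m02 is taken from the log; "shut off" is libvirt's state string for a stopped domain):

    # Poll once per second, as the kvm2 driver does, for up to 120 tries.
    for i in $(seq 1 120); do
      state=$(virsh domstate ha-912667-m02)
      [ "$state" = "shut off" ] && break
      sleep 1
    done
    echo "final state after $i tries: $state"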
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr: exit status 3 (19.044829114s)

                                                
                                                
-- stdout --
	ha-912667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-912667-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 18:57:17.467075   28945 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:57:17.467186   28945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:57:17.467195   28945 out.go:304] Setting ErrFile to fd 2...
	I0425 18:57:17.467198   28945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:57:17.467389   28945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:57:17.467549   28945 out.go:298] Setting JSON to false
	I0425 18:57:17.467572   28945 mustload.go:65] Loading cluster: ha-912667
	I0425 18:57:17.467624   28945 notify.go:220] Checking for updates...
	I0425 18:57:17.467941   28945 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:57:17.467954   28945 status.go:255] checking status of ha-912667 ...
	I0425 18:57:17.468331   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:17.468394   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:17.485328   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0425 18:57:17.485740   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:17.486409   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:17.486438   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:17.486808   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:17.487027   28945 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:57:17.488632   28945 status.go:330] ha-912667 host status = "Running" (err=<nil>)
	I0425 18:57:17.488652   28945 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:57:17.488937   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:17.488988   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:17.503953   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0425 18:57:17.504335   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:17.504836   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:17.504859   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:17.505116   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:17.505265   28945 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:57:17.507847   28945 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:17.508273   28945 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:57:17.508309   28945 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:17.508432   28945 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:57:17.508711   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:17.508744   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:17.522894   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37177
	I0425 18:57:17.523274   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:17.523710   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:17.523731   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:17.524051   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:17.524229   28945 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:57:17.524435   28945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:17.524464   28945 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:57:17.527117   28945 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:17.527529   28945 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:57:17.527550   28945 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:17.527698   28945 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:57:17.527848   28945 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:57:17.528023   28945 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:57:17.528179   28945 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:57:17.618311   28945 ssh_runner.go:195] Run: systemctl --version
	I0425 18:57:17.627217   28945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:17.649677   28945 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:57:17.649711   28945 api_server.go:166] Checking apiserver status ...
	I0425 18:57:17.649761   28945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:57:17.669758   28945 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0425 18:57:17.682658   28945 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:57:17.682718   28945 ssh_runner.go:195] Run: ls
	I0425 18:57:17.688408   28945 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:57:17.693760   28945 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:57:17.693797   28945 status.go:422] ha-912667 apiserver status = Running (err=<nil>)
	I0425 18:57:17.693807   28945 status.go:257] ha-912667 status: &{Name:ha-912667 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:57:17.693828   28945 status.go:255] checking status of ha-912667-m02 ...
	I0425 18:57:17.694619   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:17.694670   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:17.710279   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46689
	I0425 18:57:17.710674   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:17.711067   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:17.711096   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:17.711473   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:17.711694   28945 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 18:57:17.713418   28945 status.go:330] ha-912667-m02 host status = "Running" (err=<nil>)
	I0425 18:57:17.713436   28945 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:57:17.713715   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:17.713747   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:17.729617   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42681
	I0425 18:57:17.730100   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:17.730598   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:17.730623   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:17.730972   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:17.731152   28945 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:57:17.733985   28945 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:17.734384   28945 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:57:17.734413   28945 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:17.734579   28945 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:57:17.734913   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:17.734958   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:17.749979   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36021
	I0425 18:57:17.750507   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:17.751107   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:17.751134   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:17.751492   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:17.751663   28945 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:57:17.751841   28945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:17.751858   28945 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:57:17.754880   28945 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:17.755224   28945 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:57:17.755245   28945 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:17.755473   28945 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:57:17.755672   28945 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:57:17.755837   28945 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:57:17.755993   28945 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	W0425 18:57:36.074442   28945 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.66:22: connect: no route to host
	W0425 18:57:36.074583   28945 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	E0425 18:57:36.074602   28945 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:57:36.074609   28945 status.go:257] ha-912667-m02 status: &{Name:ha-912667-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 18:57:36.074631   28945 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:57:36.074641   28945 status.go:255] checking status of ha-912667-m03 ...
	I0425 18:57:36.074976   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:36.075028   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:36.090425   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39545
	I0425 18:57:36.090882   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:36.091339   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:36.091360   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:36.091657   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:36.091835   28945 main.go:141] libmachine: (ha-912667-m03) Calling .GetState
	I0425 18:57:36.093545   28945 status.go:330] ha-912667-m03 host status = "Running" (err=<nil>)
	I0425 18:57:36.093561   28945 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:57:36.093949   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:36.093992   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:36.110567   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33271
	I0425 18:57:36.111018   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:36.111499   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:36.111525   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:36.111903   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:36.112122   28945 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:57:36.115068   28945 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:36.115484   28945 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:57:36.115509   28945 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:36.115661   28945 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:57:36.115975   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:36.116021   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:36.131328   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0425 18:57:36.131694   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:36.132056   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:36.132079   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:36.132371   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:36.132544   28945 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:57:36.132738   28945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:36.132762   28945 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:57:36.135506   28945 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:36.135919   28945 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:57:36.135949   28945 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:36.136091   28945 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:57:36.136275   28945 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:57:36.136434   28945 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:57:36.136567   28945 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:57:36.225762   28945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:36.248755   28945 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:57:36.248789   28945 api_server.go:166] Checking apiserver status ...
	I0425 18:57:36.248839   28945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:57:36.267093   28945 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0425 18:57:36.277781   28945 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:57:36.277832   28945 ssh_runner.go:195] Run: ls
	I0425 18:57:36.282804   28945 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:57:36.289073   28945 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:57:36.289098   28945 status.go:422] ha-912667-m03 apiserver status = Running (err=<nil>)
	I0425 18:57:36.289107   28945 status.go:257] ha-912667-m03 status: &{Name:ha-912667-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:57:36.289125   28945 status.go:255] checking status of ha-912667-m04 ...
	I0425 18:57:36.289415   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:36.289449   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:36.304227   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I0425 18:57:36.304654   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:36.305100   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:36.305120   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:36.305439   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:36.305617   28945 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 18:57:36.307042   28945 status.go:330] ha-912667-m04 host status = "Running" (err=<nil>)
	I0425 18:57:36.307055   28945 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:57:36.307320   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:36.307358   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:36.322691   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I0425 18:57:36.323097   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:36.323534   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:36.323553   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:36.323868   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:36.324053   28945 main.go:141] libmachine: (ha-912667-m04) Calling .GetIP
	I0425 18:57:36.326792   28945 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:36.327220   28945 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:57:36.327249   28945 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:36.327354   28945 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:57:36.327651   28945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:36.327685   28945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:36.343197   28945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I0425 18:57:36.343629   28945 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:36.344118   28945 main.go:141] libmachine: Using API Version  1
	I0425 18:57:36.344137   28945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:36.344429   28945 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:36.344611   28945 main.go:141] libmachine: (ha-912667-m04) Calling .DriverName
	I0425 18:57:36.344809   28945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:36.344839   28945 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHHostname
	I0425 18:57:36.347752   28945 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:36.348165   28945 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:57:36.348187   28945 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:36.348353   28945 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHPort
	I0425 18:57:36.348518   28945 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHKeyPath
	I0425 18:57:36.348655   28945 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHUsername
	I0425 18:57:36.348789   28945 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m04/id_rsa Username:docker}
	I0425 18:57:36.437092   28945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:36.456310   28945 status.go:257] ha-912667-m04 status: &{Name:ha-912667-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr" : exit status 3
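Note on the failure above: the stderr shows how the status command probes each node — host state comes from libmachine, kubelet is checked with "sudo systemctl is-active --quiet service kubelet", and the apiserver is checked by requesting /healthz on the cluster endpoint recorded in the kubeconfig (here https://192.168.39.254:8443) and expecting "200: ok". The snippet below is a minimal stand-alone sketch of that last probe for manual reproduction only; it is not minikube's own status code, the endpoint is copied from the log, and InsecureSkipVerify is an illustration-only shortcut in place of trusting the cluster CA as the real checker does.

	// healthzprobe.go - hypothetical reproduction sketch, not part of the test suite.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the kubeconfig line in the stderr above.
		const endpoint = "https://192.168.39.254:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for illustration: skip CA verification instead of
				// loading the cluster CA that the real status check trusts.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		resp, err := client.Get(endpoint)
		if err != nil {
			// An unreachable apiserver is what the status output reports as Stopped/Error.
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with body "ok", matching the log above.
		fmt.Printf("%s returned %d: %s\n", endpoint, resp.StatusCode, body)
	}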
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-912667 -n ha-912667
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-912667 logs -n 25: (1.550626598s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile60710412/001/cp-test_ha-912667-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667:/home/docker/cp-test_ha-912667-m03_ha-912667.txt                     |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667 sudo cat                                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m03_ha-912667.txt                               |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m02:/home/docker/cp-test_ha-912667-m03_ha-912667-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m02 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m03_ha-912667-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04:/home/docker/cp-test_ha-912667-m03_ha-912667-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m04 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m03_ha-912667-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp testdata/cp-test.txt                                              | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile60710412/001/cp-test_ha-912667-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667:/home/docker/cp-test_ha-912667-m04_ha-912667.txt                     |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667 sudo cat                                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667.txt                               |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m02:/home/docker/cp-test_ha-912667-m04_ha-912667-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m02 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03:/home/docker/cp-test_ha-912667-m04_ha-912667-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m03 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-912667 node stop m02 -v=7                                                   | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 18:49:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 18:49:35.469800   24262 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:49:35.471114   24262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:49:35.471131   24262 out.go:304] Setting ErrFile to fd 2...
	I0425 18:49:35.471138   24262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:49:35.471361   24262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:49:35.471966   24262 out.go:298] Setting JSON to false
	I0425 18:49:35.472851   24262 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1921,"bootTime":1714069054,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 18:49:35.472907   24262 start.go:139] virtualization: kvm guest
	I0425 18:49:35.474690   24262 out.go:177] * [ha-912667] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 18:49:35.476293   24262 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 18:49:35.477409   24262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 18:49:35.476293   24262 notify.go:220] Checking for updates...
	I0425 18:49:35.479776   24262 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:49:35.481005   24262 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:49:35.482165   24262 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 18:49:35.483400   24262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 18:49:35.484732   24262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 18:49:35.518402   24262 out.go:177] * Using the kvm2 driver based on user configuration
	I0425 18:49:35.519738   24262 start.go:297] selected driver: kvm2
	I0425 18:49:35.519755   24262 start.go:901] validating driver "kvm2" against <nil>
	I0425 18:49:35.519768   24262 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 18:49:35.520503   24262 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:49:35.520593   24262 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 18:49:35.535933   24262 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 18:49:35.536000   24262 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 18:49:35.536268   24262 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 18:49:35.536333   24262 cni.go:84] Creating CNI manager for ""
	I0425 18:49:35.536349   24262 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0425 18:49:35.536356   24262 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0425 18:49:35.536451   24262 start.go:340] cluster config:
	{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:49:35.536583   24262 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:49:35.538666   24262 out.go:177] * Starting "ha-912667" primary control-plane node in "ha-912667" cluster
	I0425 18:49:35.539979   24262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:49:35.540029   24262 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 18:49:35.540041   24262 cache.go:56] Caching tarball of preloaded images
	I0425 18:49:35.540151   24262 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 18:49:35.540163   24262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 18:49:35.540499   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:49:35.540524   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json: {Name:mkaea86dc7c947902746e075d4b5d6d393bd8935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:49:35.540659   24262 start.go:360] acquireMachinesLock for ha-912667: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 18:49:35.540696   24262 start.go:364] duration metric: took 18.658µs to acquireMachinesLock for "ha-912667"
	I0425 18:49:35.540713   24262 start.go:93] Provisioning new machine with config: &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:49:35.540771   24262 start.go:125] createHost starting for "" (driver="kvm2")
	I0425 18:49:35.542390   24262 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0425 18:49:35.542512   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:49:35.542554   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:49:35.557109   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0425 18:49:35.557528   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:49:35.558113   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:49:35.558132   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:49:35.558453   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:49:35.558626   24262 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 18:49:35.558764   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:49:35.558892   24262 start.go:159] libmachine.API.Create for "ha-912667" (driver="kvm2")
	I0425 18:49:35.558954   24262 client.go:168] LocalClient.Create starting
	I0425 18:49:35.558992   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 18:49:35.559036   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:49:35.559057   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:49:35.559118   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 18:49:35.559142   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:49:35.559160   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:49:35.559183   24262 main.go:141] libmachine: Running pre-create checks...
	I0425 18:49:35.559195   24262 main.go:141] libmachine: (ha-912667) Calling .PreCreateCheck
	I0425 18:49:35.559546   24262 main.go:141] libmachine: (ha-912667) Calling .GetConfigRaw
	I0425 18:49:35.559939   24262 main.go:141] libmachine: Creating machine...
	I0425 18:49:35.559951   24262 main.go:141] libmachine: (ha-912667) Calling .Create
	I0425 18:49:35.560081   24262 main.go:141] libmachine: (ha-912667) Creating KVM machine...
	I0425 18:49:35.561210   24262 main.go:141] libmachine: (ha-912667) DBG | found existing default KVM network
	I0425 18:49:35.561889   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:35.561704   24285 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001125e0}
	I0425 18:49:35.561932   24262 main.go:141] libmachine: (ha-912667) DBG | created network xml: 
	I0425 18:49:35.561949   24262 main.go:141] libmachine: (ha-912667) DBG | <network>
	I0425 18:49:35.561960   24262 main.go:141] libmachine: (ha-912667) DBG |   <name>mk-ha-912667</name>
	I0425 18:49:35.561973   24262 main.go:141] libmachine: (ha-912667) DBG |   <dns enable='no'/>
	I0425 18:49:35.561982   24262 main.go:141] libmachine: (ha-912667) DBG |   
	I0425 18:49:35.561995   24262 main.go:141] libmachine: (ha-912667) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0425 18:49:35.562005   24262 main.go:141] libmachine: (ha-912667) DBG |     <dhcp>
	I0425 18:49:35.562031   24262 main.go:141] libmachine: (ha-912667) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0425 18:49:35.562049   24262 main.go:141] libmachine: (ha-912667) DBG |     </dhcp>
	I0425 18:49:35.562080   24262 main.go:141] libmachine: (ha-912667) DBG |   </ip>
	I0425 18:49:35.562102   24262 main.go:141] libmachine: (ha-912667) DBG |   
	I0425 18:49:35.562115   24262 main.go:141] libmachine: (ha-912667) DBG | </network>
	I0425 18:49:35.562125   24262 main.go:141] libmachine: (ha-912667) DBG | 
	I0425 18:49:35.567221   24262 main.go:141] libmachine: (ha-912667) DBG | trying to create private KVM network mk-ha-912667 192.168.39.0/24...
	I0425 18:49:35.630513   24262 main.go:141] libmachine: (ha-912667) DBG | private KVM network mk-ha-912667 192.168.39.0/24 created
	I0425 18:49:35.630541   24262 main.go:141] libmachine: (ha-912667) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667 ...
	I0425 18:49:35.630558   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:35.630503   24285 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:49:35.630574   24262 main.go:141] libmachine: (ha-912667) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 18:49:35.630637   24262 main.go:141] libmachine: (ha-912667) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 18:49:35.856167   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:35.856020   24285 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa...
	I0425 18:49:35.993843   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:35.993741   24285 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/ha-912667.rawdisk...
	I0425 18:49:35.993892   24262 main.go:141] libmachine: (ha-912667) DBG | Writing magic tar header
	I0425 18:49:35.993902   24262 main.go:141] libmachine: (ha-912667) DBG | Writing SSH key tar header
	I0425 18:49:35.993911   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:35.993856   24285 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667 ...
	I0425 18:49:35.993985   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667
	I0425 18:49:35.994012   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667 (perms=drwx------)
	I0425 18:49:35.994025   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 18:49:35.994041   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:49:35.994051   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 18:49:35.994060   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 18:49:35.994069   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins
	I0425 18:49:35.994079   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home
	I0425 18:49:35.994097   24262 main.go:141] libmachine: (ha-912667) DBG | Skipping /home - not owner
	I0425 18:49:35.994111   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 18:49:35.994129   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 18:49:35.994141   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 18:49:35.994153   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 18:49:35.994165   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 18:49:35.994182   24262 main.go:141] libmachine: (ha-912667) Creating domain...
	I0425 18:49:35.995208   24262 main.go:141] libmachine: (ha-912667) define libvirt domain using xml: 
	I0425 18:49:35.995226   24262 main.go:141] libmachine: (ha-912667) <domain type='kvm'>
	I0425 18:49:35.995235   24262 main.go:141] libmachine: (ha-912667)   <name>ha-912667</name>
	I0425 18:49:35.995242   24262 main.go:141] libmachine: (ha-912667)   <memory unit='MiB'>2200</memory>
	I0425 18:49:35.995250   24262 main.go:141] libmachine: (ha-912667)   <vcpu>2</vcpu>
	I0425 18:49:35.995256   24262 main.go:141] libmachine: (ha-912667)   <features>
	I0425 18:49:35.995264   24262 main.go:141] libmachine: (ha-912667)     <acpi/>
	I0425 18:49:35.995270   24262 main.go:141] libmachine: (ha-912667)     <apic/>
	I0425 18:49:35.995275   24262 main.go:141] libmachine: (ha-912667)     <pae/>
	I0425 18:49:35.995280   24262 main.go:141] libmachine: (ha-912667)     
	I0425 18:49:35.995288   24262 main.go:141] libmachine: (ha-912667)   </features>
	I0425 18:49:35.995293   24262 main.go:141] libmachine: (ha-912667)   <cpu mode='host-passthrough'>
	I0425 18:49:35.995308   24262 main.go:141] libmachine: (ha-912667)   
	I0425 18:49:35.995325   24262 main.go:141] libmachine: (ha-912667)   </cpu>
	I0425 18:49:35.995334   24262 main.go:141] libmachine: (ha-912667)   <os>
	I0425 18:49:35.995344   24262 main.go:141] libmachine: (ha-912667)     <type>hvm</type>
	I0425 18:49:35.995362   24262 main.go:141] libmachine: (ha-912667)     <boot dev='cdrom'/>
	I0425 18:49:35.995369   24262 main.go:141] libmachine: (ha-912667)     <boot dev='hd'/>
	I0425 18:49:35.995379   24262 main.go:141] libmachine: (ha-912667)     <bootmenu enable='no'/>
	I0425 18:49:35.995396   24262 main.go:141] libmachine: (ha-912667)   </os>
	I0425 18:49:35.995416   24262 main.go:141] libmachine: (ha-912667)   <devices>
	I0425 18:49:35.995433   24262 main.go:141] libmachine: (ha-912667)     <disk type='file' device='cdrom'>
	I0425 18:49:35.995441   24262 main.go:141] libmachine: (ha-912667)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/boot2docker.iso'/>
	I0425 18:49:35.995449   24262 main.go:141] libmachine: (ha-912667)       <target dev='hdc' bus='scsi'/>
	I0425 18:49:35.995457   24262 main.go:141] libmachine: (ha-912667)       <readonly/>
	I0425 18:49:35.995463   24262 main.go:141] libmachine: (ha-912667)     </disk>
	I0425 18:49:35.995468   24262 main.go:141] libmachine: (ha-912667)     <disk type='file' device='disk'>
	I0425 18:49:35.995476   24262 main.go:141] libmachine: (ha-912667)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 18:49:35.995486   24262 main.go:141] libmachine: (ha-912667)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/ha-912667.rawdisk'/>
	I0425 18:49:35.995493   24262 main.go:141] libmachine: (ha-912667)       <target dev='hda' bus='virtio'/>
	I0425 18:49:35.995498   24262 main.go:141] libmachine: (ha-912667)     </disk>
	I0425 18:49:35.995505   24262 main.go:141] libmachine: (ha-912667)     <interface type='network'>
	I0425 18:49:35.995533   24262 main.go:141] libmachine: (ha-912667)       <source network='mk-ha-912667'/>
	I0425 18:49:35.995560   24262 main.go:141] libmachine: (ha-912667)       <model type='virtio'/>
	I0425 18:49:35.995575   24262 main.go:141] libmachine: (ha-912667)     </interface>
	I0425 18:49:35.995588   24262 main.go:141] libmachine: (ha-912667)     <interface type='network'>
	I0425 18:49:35.995602   24262 main.go:141] libmachine: (ha-912667)       <source network='default'/>
	I0425 18:49:35.995614   24262 main.go:141] libmachine: (ha-912667)       <model type='virtio'/>
	I0425 18:49:35.995628   24262 main.go:141] libmachine: (ha-912667)     </interface>
	I0425 18:49:35.995645   24262 main.go:141] libmachine: (ha-912667)     <serial type='pty'>
	I0425 18:49:35.995668   24262 main.go:141] libmachine: (ha-912667)       <target port='0'/>
	I0425 18:49:35.995680   24262 main.go:141] libmachine: (ha-912667)     </serial>
	I0425 18:49:35.995694   24262 main.go:141] libmachine: (ha-912667)     <console type='pty'>
	I0425 18:49:35.995706   24262 main.go:141] libmachine: (ha-912667)       <target type='serial' port='0'/>
	I0425 18:49:35.995723   24262 main.go:141] libmachine: (ha-912667)     </console>
	I0425 18:49:35.995741   24262 main.go:141] libmachine: (ha-912667)     <rng model='virtio'>
	I0425 18:49:35.995755   24262 main.go:141] libmachine: (ha-912667)       <backend model='random'>/dev/random</backend>
	I0425 18:49:35.995765   24262 main.go:141] libmachine: (ha-912667)     </rng>
	I0425 18:49:35.995777   24262 main.go:141] libmachine: (ha-912667)     
	I0425 18:49:35.995788   24262 main.go:141] libmachine: (ha-912667)     
	I0425 18:49:35.995801   24262 main.go:141] libmachine: (ha-912667)   </devices>
	I0425 18:49:35.995828   24262 main.go:141] libmachine: (ha-912667) </domain>
	I0425 18:49:35.995844   24262 main.go:141] libmachine: (ha-912667) 
	I0425 18:49:36.001722   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:d3:aa:e8 in network default
	I0425 18:49:36.002318   24262 main.go:141] libmachine: (ha-912667) Ensuring networks are active...
	I0425 18:49:36.002336   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:36.002959   24262 main.go:141] libmachine: (ha-912667) Ensuring network default is active
	I0425 18:49:36.003284   24262 main.go:141] libmachine: (ha-912667) Ensuring network mk-ha-912667 is active
	I0425 18:49:36.003742   24262 main.go:141] libmachine: (ha-912667) Getting domain xml...
	I0425 18:49:36.004540   24262 main.go:141] libmachine: (ha-912667) Creating domain...
	I0425 18:49:37.173393   24262 main.go:141] libmachine: (ha-912667) Waiting to get IP...
	I0425 18:49:37.174284   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:37.174672   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:37.174707   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:37.174646   24285 retry.go:31] will retry after 292.650601ms: waiting for machine to come up
	I0425 18:49:37.469205   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:37.469643   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:37.469668   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:37.469606   24285 retry.go:31] will retry after 373.276627ms: waiting for machine to come up
	I0425 18:49:37.844039   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:37.844434   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:37.844463   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:37.844403   24285 retry.go:31] will retry after 343.112246ms: waiting for machine to come up
	I0425 18:49:38.188940   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:38.189427   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:38.189458   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:38.189371   24285 retry.go:31] will retry after 489.386145ms: waiting for machine to come up
	I0425 18:49:38.679903   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:38.680379   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:38.680404   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:38.680331   24285 retry.go:31] will retry after 598.945496ms: waiting for machine to come up
	I0425 18:49:39.281509   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:39.282156   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:39.282185   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:39.282091   24285 retry.go:31] will retry after 639.572202ms: waiting for machine to come up
	I0425 18:49:39.922960   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:39.923304   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:39.923348   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:39.923279   24285 retry.go:31] will retry after 876.557847ms: waiting for machine to come up
	I0425 18:49:40.801689   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:40.802099   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:40.802125   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:40.802048   24285 retry.go:31] will retry after 1.040148124s: waiting for machine to come up
	I0425 18:49:41.844086   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:41.844488   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:41.844511   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:41.844457   24285 retry.go:31] will retry after 1.811704814s: waiting for machine to come up
	I0425 18:49:43.658521   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:43.658930   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:43.658974   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:43.658892   24285 retry.go:31] will retry after 2.216558346s: waiting for machine to come up
	I0425 18:49:45.877597   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:45.878014   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:45.878037   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:45.877971   24285 retry.go:31] will retry after 2.176487509s: waiting for machine to come up
	I0425 18:49:48.057321   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:48.057761   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:48.057782   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:48.057727   24285 retry.go:31] will retry after 3.000506427s: waiting for machine to come up
	I0425 18:49:51.059530   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:51.059895   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:51.059925   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:51.059865   24285 retry.go:31] will retry after 4.068045939s: waiting for machine to come up
	I0425 18:49:55.133027   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:55.133367   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:55.133405   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:55.133336   24285 retry.go:31] will retry after 4.1493096s: waiting for machine to come up
	I0425 18:49:59.286531   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.286979   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has current primary IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.286997   24262 main.go:141] libmachine: (ha-912667) Found IP for machine: 192.168.39.189
	I0425 18:49:59.287009   24262 main.go:141] libmachine: (ha-912667) Reserving static IP address...
	I0425 18:49:59.287351   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find host DHCP lease matching {name: "ha-912667", mac: "52:54:00:f2:04:73", ip: "192.168.39.189"} in network mk-ha-912667
	I0425 18:49:59.357601   24262 main.go:141] libmachine: (ha-912667) DBG | Getting to WaitForSSH function...
	I0425 18:49:59.357637   24262 main.go:141] libmachine: (ha-912667) Reserved static IP address: 192.168.39.189
	I0425 18:49:59.357652   24262 main.go:141] libmachine: (ha-912667) Waiting for SSH to be available...
	I0425 18:49:59.359971   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.360382   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.360419   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.360569   24262 main.go:141] libmachine: (ha-912667) DBG | Using SSH client type: external
	I0425 18:49:59.360693   24262 main.go:141] libmachine: (ha-912667) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa (-rw-------)
	I0425 18:49:59.360740   24262 main.go:141] libmachine: (ha-912667) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 18:49:59.360760   24262 main.go:141] libmachine: (ha-912667) DBG | About to run SSH command:
	I0425 18:49:59.360782   24262 main.go:141] libmachine: (ha-912667) DBG | exit 0
	I0425 18:49:59.486690   24262 main.go:141] libmachine: (ha-912667) DBG | SSH cmd err, output: <nil>: 
	I0425 18:49:59.487035   24262 main.go:141] libmachine: (ha-912667) KVM machine creation complete!
	I0425 18:49:59.487328   24262 main.go:141] libmachine: (ha-912667) Calling .GetConfigRaw
	I0425 18:49:59.487862   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:49:59.488044   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:49:59.488201   24262 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 18:49:59.488215   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:49:59.489328   24262 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 18:49:59.489345   24262 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 18:49:59.489353   24262 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 18:49:59.489361   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:49:59.491781   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.492187   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.492209   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.492390   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:49:59.492569   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.492707   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.492898   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:49:59.493059   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:49:59.493269   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:49:59.493282   24262 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 18:49:59.597859   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 18:49:59.597881   24262 main.go:141] libmachine: Detecting the provisioner...
	I0425 18:49:59.597888   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:49:59.600514   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.601001   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.601024   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.601244   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:49:59.601430   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.601622   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.601749   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:49:59.601909   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:49:59.602101   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:49:59.602114   24262 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 18:49:59.707580   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 18:49:59.707693   24262 main.go:141] libmachine: found compatible host: buildroot
	I0425 18:49:59.707710   24262 main.go:141] libmachine: Provisioning with buildroot...
	I0425 18:49:59.707721   24262 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 18:49:59.707946   24262 buildroot.go:166] provisioning hostname "ha-912667"
	I0425 18:49:59.707968   24262 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 18:49:59.708146   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:49:59.710647   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.710956   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.710980   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.711109   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:49:59.711269   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.711438   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.711546   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:49:59.711691   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:49:59.711910   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:49:59.711925   24262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-912667 && echo "ha-912667" | sudo tee /etc/hostname
	I0425 18:49:59.828703   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-912667
	
	I0425 18:49:59.828734   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:49:59.831060   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.831343   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.831366   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.831508   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:49:59.831698   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.831855   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.831988   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:49:59.832154   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:49:59.832352   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:49:59.832371   24262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-912667' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-912667/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-912667' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 18:49:59.948805   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 18:49:59.948828   24262 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 18:49:59.948855   24262 buildroot.go:174] setting up certificates
	I0425 18:49:59.948868   24262 provision.go:84] configureAuth start
	I0425 18:49:59.948886   24262 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 18:49:59.949136   24262 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:49:59.951730   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.952034   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.952058   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.952239   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:49:59.954284   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.954602   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.954626   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.954721   24262 provision.go:143] copyHostCerts
	I0425 18:49:59.954748   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:49:59.954784   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 18:49:59.954793   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:49:59.954864   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 18:49:59.954948   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:49:59.954965   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 18:49:59.954971   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:49:59.954995   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 18:49:59.955045   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:49:59.955060   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 18:49:59.955067   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:49:59.955086   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 18:49:59.955147   24262 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.ha-912667 san=[127.0.0.1 192.168.39.189 ha-912667 localhost minikube]
	I0425 18:50:00.008083   24262 provision.go:177] copyRemoteCerts
	I0425 18:50:00.008153   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 18:50:00.008173   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.010697   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.011011   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.011037   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.011221   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.011406   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.011519   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.011653   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:00.093508   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 18:50:00.093584   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 18:50:00.122848   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 18:50:00.122936   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0425 18:50:00.148658   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 18:50:00.148732   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 18:50:00.176370   24262 provision.go:87] duration metric: took 227.48225ms to configureAuth
	I0425 18:50:00.176402   24262 buildroot.go:189] setting minikube options for container-runtime
	I0425 18:50:00.176571   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:50:00.176636   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.179236   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.179633   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.179663   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.179801   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.180003   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.180202   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.180346   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.180551   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:50:00.180731   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:50:00.180749   24262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 18:50:00.460168   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 18:50:00.460201   24262 main.go:141] libmachine: Checking connection to Docker...
	I0425 18:50:00.460211   24262 main.go:141] libmachine: (ha-912667) Calling .GetURL
	I0425 18:50:00.461407   24262 main.go:141] libmachine: (ha-912667) DBG | Using libvirt version 6000000
	I0425 18:50:00.463582   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.463894   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.463923   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.464068   24262 main.go:141] libmachine: Docker is up and running!
	I0425 18:50:00.464080   24262 main.go:141] libmachine: Reticulating splines...
	I0425 18:50:00.464086   24262 client.go:171] duration metric: took 24.905122677s to LocalClient.Create
	I0425 18:50:00.464104   24262 start.go:167] duration metric: took 24.905214044s to libmachine.API.Create "ha-912667"
	I0425 18:50:00.464114   24262 start.go:293] postStartSetup for "ha-912667" (driver="kvm2")
	I0425 18:50:00.464122   24262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 18:50:00.464136   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:00.464353   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 18:50:00.464378   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.466261   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.466584   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.466608   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.466746   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.466934   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.467082   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.467205   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:00.550088   24262 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 18:50:00.554948   24262 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 18:50:00.554981   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 18:50:00.555075   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 18:50:00.555159   24262 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 18:50:00.555170   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 18:50:00.555268   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 18:50:00.566291   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:50:00.592708   24262 start.go:296] duration metric: took 128.58284ms for postStartSetup
	I0425 18:50:00.592746   24262 main.go:141] libmachine: (ha-912667) Calling .GetConfigRaw
	I0425 18:50:00.593257   24262 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:50:00.595651   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.595948   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.595971   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.596220   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:50:00.596379   24262 start.go:128] duration metric: took 25.055600373s to createHost
	I0425 18:50:00.596401   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.598325   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.598586   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.598619   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.598758   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.598933   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.599086   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.599189   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.599306   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:50:00.599501   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:50:00.599527   24262 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 18:50:00.707663   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714071000.689845653
	
	I0425 18:50:00.707685   24262 fix.go:216] guest clock: 1714071000.689845653
	I0425 18:50:00.707693   24262 fix.go:229] Guest: 2024-04-25 18:50:00.689845653 +0000 UTC Remote: 2024-04-25 18:50:00.596390759 +0000 UTC m=+25.171804641 (delta=93.454894ms)
	I0425 18:50:00.707725   24262 fix.go:200] guest clock delta is within tolerance: 93.454894ms
	I0425 18:50:00.707730   24262 start.go:83] releasing machines lock for "ha-912667", held for 25.167025439s
	I0425 18:50:00.707751   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:00.708001   24262 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:50:00.710414   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.710715   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.710760   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.710914   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:00.711428   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:00.711611   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:00.711706   24262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 18:50:00.711746   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.711782   24262 ssh_runner.go:195] Run: cat /version.json
	I0425 18:50:00.711808   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.714262   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.714601   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.714633   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.714661   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.714762   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.714916   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.714960   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.714982   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.715074   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.715179   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.715225   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:00.715303   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.715396   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.715518   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:00.792222   24262 ssh_runner.go:195] Run: systemctl --version
	I0425 18:50:00.817107   24262 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 18:50:00.978976   24262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 18:50:00.985481   24262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 18:50:00.985547   24262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 18:50:01.002497   24262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 18:50:01.002518   24262 start.go:494] detecting cgroup driver to use...
	I0425 18:50:01.002565   24262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 18:50:01.018272   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 18:50:01.032711   24262 docker.go:217] disabling cri-docker service (if available) ...
	I0425 18:50:01.032776   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 18:50:01.046860   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 18:50:01.060895   24262 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 18:50:01.180129   24262 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 18:50:01.352614   24262 docker.go:233] disabling docker service ...
	I0425 18:50:01.352697   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 18:50:01.369345   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 18:50:01.384253   24262 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 18:50:01.514717   24262 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 18:50:01.637248   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 18:50:01.652388   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 18:50:01.673257   24262 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 18:50:01.673329   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.685625   24262 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 18:50:01.685714   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.698390   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.710705   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.722948   24262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 18:50:01.735752   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.748133   24262 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.767545   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.780135   24262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 18:50:01.791443   24262 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 18:50:01.791500   24262 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 18:50:01.807418   24262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 18:50:01.819224   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:50:01.954389   24262 ssh_runner.go:195] Run: sudo systemctl restart crio
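
The sed commands above drive all of the CRI-O configuration through the drop-in at /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroupfs cgroup manager, the conmon cgroup, and the net.ipv4.ip_unprivileged_port_start sysctl. Purely as an illustrative spot-check (not something this test run executes), the edited keys could be inspected on the VM with:

	# hypothetical verification of the drop-in edited above (profile name taken from this run)
	minikube -p ha-912667 ssh -- grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
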
	I0425 18:50:02.109149   24262 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 18:50:02.109219   24262 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 18:50:02.114436   24262 start.go:562] Will wait 60s for crictl version
	I0425 18:50:02.114482   24262 ssh_runner.go:195] Run: which crictl
	I0425 18:50:02.118484   24262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 18:50:02.160407   24262 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 18:50:02.160522   24262 ssh_runner.go:195] Run: crio --version
	I0425 18:50:02.192176   24262 ssh_runner.go:195] Run: crio --version
	I0425 18:50:02.225009   24262 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 18:50:02.226615   24262 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:50:02.228982   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:02.229338   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:02.229368   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:02.229652   24262 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 18:50:02.234282   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:50:02.249719   24262 kubeadm.go:877] updating cluster {Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 18:50:02.249826   24262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:50:02.249867   24262 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 18:50:02.286423   24262 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 18:50:02.286483   24262 ssh_runner.go:195] Run: which lz4
	I0425 18:50:02.290889   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0425 18:50:02.290983   24262 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 18:50:02.295888   24262 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 18:50:02.295912   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 18:50:03.988836   24262 crio.go:462] duration metric: took 1.697878668s to copy over tarball
	I0425 18:50:03.988895   24262 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 18:50:06.456388   24262 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.4674596s)
	I0425 18:50:06.456425   24262 crio.go:469] duration metric: took 2.467561699s to extract the tarball
	I0425 18:50:06.456434   24262 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 18:50:06.495294   24262 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 18:50:06.547133   24262 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 18:50:06.547154   24262 cache_images.go:84] Images are preloaded, skipping loading
	I0425 18:50:06.547164   24262 kubeadm.go:928] updating node { 192.168.39.189 8443 v1.30.0 crio true true} ...
	I0425 18:50:06.547268   24262 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-912667 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 18:50:06.547359   24262 ssh_runner.go:195] Run: crio config
	I0425 18:50:06.593864   24262 cni.go:84] Creating CNI manager for ""
	I0425 18:50:06.593888   24262 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0425 18:50:06.593900   24262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 18:50:06.593930   24262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-912667 NodeName:ha-912667 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 18:50:06.594091   24262 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-912667"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.189
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
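
The kubeadm config printed above is later written to /var/tmp/minikube/kubeadm.yaml.new and passed to kubeadm init via --config. As an aside, and assuming the YAML were saved locally as kubeadm.yaml, a recent kubeadm can sanity-check a config of this shape offline; this is only an illustrative sketch, not a step the test performs:

	# hypothetical offline validation of the generated kubeadm config
	kubeadm config validate --config kubeadm.yaml
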
	
	I0425 18:50:06.594120   24262 kube-vip.go:111] generating kube-vip config ...
	I0425 18:50:06.594167   24262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0425 18:50:06.616921   24262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0425 18:50:06.617049   24262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
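
This kube-vip manifest provides the control-plane VIP 192.168.39.254 referenced in the cluster config; it is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below and runs as a static pod on the node. Again only as an illustrative check rather than part of the test, one could confirm the VIP was bound on the node's eth0 with:

	# hypothetical check that kube-vip announced the VIP (profile name taken from this run)
	minikube -p ha-912667 ssh -- ip addr show eth0 | grep 192.168.39.254
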
	I0425 18:50:06.617132   24262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 18:50:06.633591   24262 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 18:50:06.633648   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0425 18:50:06.644675   24262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0425 18:50:06.663438   24262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 18:50:06.681860   24262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0425 18:50:06.700503   24262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0425 18:50:06.719035   24262 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0425 18:50:06.723411   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:50:06.736636   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:50:06.881784   24262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:50:06.900951   24262 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667 for IP: 192.168.39.189
	I0425 18:50:06.900979   24262 certs.go:194] generating shared ca certs ...
	I0425 18:50:06.900999   24262 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:06.901213   24262 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 18:50:06.901275   24262 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 18:50:06.901296   24262 certs.go:256] generating profile certs ...
	I0425 18:50:06.901364   24262 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key
	I0425 18:50:06.901385   24262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt with IP's: []
	I0425 18:50:07.197964   24262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt ...
	I0425 18:50:07.197995   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt: {Name:mkc3ff1f172713a4c9e99916dbf5dd6d8bd441d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.198153   24262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key ...
	I0425 18:50:07.198164   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key: {Name:mkc518be03db694a05e374dc619217f41b49d35f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.198253   24262 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.11613977
	I0425 18:50:07.198267   24262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.11613977 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189 192.168.39.254]
	I0425 18:50:07.355394   24262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.11613977 ...
	I0425 18:50:07.355429   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.11613977: {Name:mk81b9c860a5f69befde658e1feebb2f32b35f6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.355573   24262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.11613977 ...
	I0425 18:50:07.355585   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.11613977: {Name:mke84934957246a63a3f2ef2d488b41d02efc4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.355650   24262 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.11613977 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt
	I0425 18:50:07.355721   24262 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.11613977 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key
	I0425 18:50:07.355771   24262 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key
	I0425 18:50:07.355785   24262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt with IP's: []
	I0425 18:50:07.433932   24262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt ...
	I0425 18:50:07.433962   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt: {Name:mk3a035fbc85b97c96ad782548ea30273a035173 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.434109   24262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key ...
	I0425 18:50:07.434119   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key: {Name:mk5185d04df7e21e25a0334444109356dcf25f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.434179   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 18:50:07.434201   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 18:50:07.434230   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 18:50:07.434240   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 18:50:07.434249   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 18:50:07.434265   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 18:50:07.434275   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 18:50:07.434284   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 18:50:07.434336   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 18:50:07.434374   24262 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 18:50:07.434382   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 18:50:07.434401   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 18:50:07.434422   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 18:50:07.434442   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 18:50:07.434478   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:50:07.434510   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:50:07.434523   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 18:50:07.434534   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 18:50:07.435103   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 18:50:07.471231   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 18:50:07.501869   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 18:50:07.532288   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 18:50:07.562851   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 18:50:07.592410   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 18:50:07.622943   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 18:50:07.657028   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0425 18:50:07.685926   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 18:50:07.721853   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 18:50:07.753558   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 18:50:07.781706   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 18:50:07.801530   24262 ssh_runner.go:195] Run: openssl version
	I0425 18:50:07.808002   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 18:50:07.820553   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:50:07.825983   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:50:07.826031   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:50:07.832602   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 18:50:07.845512   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 18:50:07.858541   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 18:50:07.864166   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 18:50:07.864244   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 18:50:07.871451   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 18:50:07.885597   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 18:50:07.898895   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 18:50:07.904401   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 18:50:07.904471   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 18:50:07.911312   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 18:50:07.923983   24262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 18:50:07.929153   24262 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 18:50:07.929237   24262 kubeadm.go:391] StartCluster: {Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:50:07.929313   24262 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 18:50:07.929374   24262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 18:50:07.972658   24262 cri.go:89] found id: ""
	I0425 18:50:07.972745   24262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0425 18:50:07.983957   24262 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 18:50:07.995766   24262 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 18:50:08.007742   24262 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 18:50:08.007772   24262 kubeadm.go:156] found existing configuration files:
	
	I0425 18:50:08.007813   24262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 18:50:08.018888   24262 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 18:50:08.018948   24262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 18:50:08.030479   24262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 18:50:08.041039   24262 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 18:50:08.041109   24262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 18:50:08.052211   24262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 18:50:08.062770   24262 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 18:50:08.062883   24262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 18:50:08.073789   24262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 18:50:08.084165   24262 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 18:50:08.084235   24262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
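	The kubeadm.go:154-162 lines above are the stale-config check: each existing kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed before kubeadm init runs. A rough Go sketch of that idea (the runner interface and flow here are illustrative assumptions, not minikube's actual source):

    // Illustrative sketch of the stale kubeconfig cleanup seen in the log above.
    package sketch

    import "fmt"

    // runner abstracts "run a command on the guest over SSH" (assumed helper).
    type runner interface {
        Run(cmd string) error
    }

    func cleanupStaleConfigs(r runner, endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // If the file does not mention the expected endpoint (or does not
            // exist at all), grep exits non-zero and the file is removed.
            if err := r.Run(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
                _ = r.Run("sudo rm -f " + f)
            }
        }
    }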
	I0425 18:50:08.095495   24262 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 18:50:08.219358   24262 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 18:50:08.219429   24262 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 18:50:08.354045   24262 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 18:50:08.354184   24262 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 18:50:08.354289   24262 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 18:50:08.627745   24262 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 18:50:08.807241   24262 out.go:204]   - Generating certificates and keys ...
	I0425 18:50:08.807347   24262 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 18:50:08.807427   24262 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 18:50:08.807491   24262 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0425 18:50:08.876352   24262 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0425 18:50:09.019219   24262 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0425 18:50:09.229578   24262 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0425 18:50:09.612187   24262 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0425 18:50:09.612367   24262 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-912667 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I0425 18:50:09.720142   24262 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0425 18:50:09.720471   24262 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-912667 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I0425 18:50:09.944095   24262 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0425 18:50:10.141302   24262 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0425 18:50:10.311087   24262 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0425 18:50:10.311154   24262 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 18:50:10.428002   24262 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 18:50:10.732361   24262 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 18:50:11.005871   24262 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 18:50:11.228112   24262 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 18:50:11.451352   24262 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 18:50:11.452350   24262 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 18:50:11.455653   24262 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 18:50:11.457640   24262 out.go:204]   - Booting up control plane ...
	I0425 18:50:11.457748   24262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 18:50:11.457840   24262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 18:50:11.457954   24262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 18:50:11.476021   24262 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 18:50:11.476125   24262 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 18:50:11.476210   24262 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 18:50:11.616297   24262 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 18:50:11.616387   24262 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 18:50:12.118062   24262 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.859749ms
	I0425 18:50:12.118201   24262 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 18:50:18.104952   24262 kubeadm.go:309] [api-check] The API server is healthy after 5.988219274s
	I0425 18:50:18.122983   24262 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 18:50:18.139515   24262 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 18:50:18.177717   24262 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 18:50:18.177976   24262 kubeadm.go:309] [mark-control-plane] Marking the node ha-912667 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 18:50:18.194139   24262 kubeadm.go:309] [bootstrap-token] Using token: oba30z.3wm2lnpm5w9re787
	I0425 18:50:18.195616   24262 out.go:204]   - Configuring RBAC rules ...
	I0425 18:50:18.195712   24262 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 18:50:18.200271   24262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 18:50:18.219703   24262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 18:50:18.223552   24262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 18:50:18.227647   24262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 18:50:18.231336   24262 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 18:50:18.513584   24262 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 18:50:18.960692   24262 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 18:50:19.512641   24262 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 18:50:19.513740   24262 kubeadm.go:309] 
	I0425 18:50:19.513824   24262 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 18:50:19.513833   24262 kubeadm.go:309] 
	I0425 18:50:19.513916   24262 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 18:50:19.513953   24262 kubeadm.go:309] 
	I0425 18:50:19.513992   24262 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 18:50:19.514083   24262 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 18:50:19.514170   24262 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 18:50:19.514187   24262 kubeadm.go:309] 
	I0425 18:50:19.514265   24262 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 18:50:19.514275   24262 kubeadm.go:309] 
	I0425 18:50:19.514329   24262 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 18:50:19.514340   24262 kubeadm.go:309] 
	I0425 18:50:19.514404   24262 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 18:50:19.514528   24262 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 18:50:19.514615   24262 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 18:50:19.514627   24262 kubeadm.go:309] 
	I0425 18:50:19.514747   24262 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 18:50:19.514870   24262 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 18:50:19.514881   24262 kubeadm.go:309] 
	I0425 18:50:19.514986   24262 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oba30z.3wm2lnpm5w9re787 \
	I0425 18:50:19.515127   24262 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
	I0425 18:50:19.515158   24262 kubeadm.go:309] 	--control-plane 
	I0425 18:50:19.515168   24262 kubeadm.go:309] 
	I0425 18:50:19.515311   24262 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 18:50:19.515324   24262 kubeadm.go:309] 
	I0425 18:50:19.515438   24262 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oba30z.3wm2lnpm5w9re787 \
	I0425 18:50:19.515578   24262 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed 
	I0425 18:50:19.516099   24262 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 18:50:19.516137   24262 cni.go:84] Creating CNI manager for ""
	I0425 18:50:19.516152   24262 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0425 18:50:19.518049   24262 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0425 18:50:19.519335   24262 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0425 18:50:19.527699   24262 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0425 18:50:19.527721   24262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0425 18:50:19.548772   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0425 18:50:19.991508   24262 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 18:50:19.991581   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:19.991610   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-912667 minikube.k8s.io/updated_at=2024_04_25T18_50_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=ha-912667 minikube.k8s.io/primary=true
	I0425 18:50:20.177477   24262 ops.go:34] apiserver oom_adj: -16
	I0425 18:50:20.177581   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:20.677787   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:21.178114   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:21.678570   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:22.178351   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:22.678534   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:23.178462   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:23.678536   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:24.177821   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:24.678601   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:25.178106   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:25.678571   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:26.178062   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:26.678017   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:27.177671   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:27.678349   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:28.177659   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:28.678268   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:29.178328   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:29.678429   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:30.177775   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:30.678518   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:31.177997   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:31.678560   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:32.177910   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:32.274129   24262 kubeadm.go:1107] duration metric: took 12.282619837s to wait for elevateKubeSystemPrivileges
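	The burst of identical `kubectl get sa default` invocations above is a poll loop: the query keeps failing until the `default` ServiceAccount exists, and the 12.28s "wait for elevateKubeSystemPrivileges" metric measures that wait. A small illustrative sketch of such a poll, with assumed (not minikube's) names:

    // Poll for the "default" ServiceAccount every 500ms, mirroring the
    // half-second cadence of the repeated kubectl calls in the log above.
    package sketch

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // service account exists; RBAC setup can proceed
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }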
	W0425 18:50:32.274167   24262 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 18:50:32.274174   24262 kubeadm.go:393] duration metric: took 24.34494449s to StartCluster
	I0425 18:50:32.274189   24262 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:32.274260   24262 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:50:32.274925   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:32.275140   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0425 18:50:32.275171   24262 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:50:32.275191   24262 start.go:240] waiting for startup goroutines ...
	I0425 18:50:32.275212   24262 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 18:50:32.275285   24262 addons.go:69] Setting storage-provisioner=true in profile "ha-912667"
	I0425 18:50:32.275298   24262 addons.go:69] Setting default-storageclass=true in profile "ha-912667"
	I0425 18:50:32.275316   24262 addons.go:234] Setting addon storage-provisioner=true in "ha-912667"
	I0425 18:50:32.275331   24262 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-912667"
	I0425 18:50:32.275343   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:50:32.275457   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:50:32.275754   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:32.275788   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:32.275808   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:32.275818   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:32.291077   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45781
	I0425 18:50:32.291104   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37037
	I0425 18:50:32.291597   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:32.291599   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:32.292141   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:32.292189   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:32.292297   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:32.292340   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:32.292508   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:32.292632   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:32.292681   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:50:32.293167   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:32.293213   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:32.294845   24262 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:50:32.295113   24262 kapi.go:59] client config for ha-912667: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt", KeyFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key", CAFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0425 18:50:32.295699   24262 cert_rotation.go:137] Starting client certificate rotation controller
	I0425 18:50:32.295824   24262 addons.go:234] Setting addon default-storageclass=true in "ha-912667"
	I0425 18:50:32.295865   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:50:32.296164   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:32.296202   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:32.307958   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I0425 18:50:32.308411   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:32.308899   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:32.308918   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:32.309254   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:32.309458   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:50:32.309940   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0425 18:50:32.310308   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:32.310783   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:32.310808   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:32.311115   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:32.311249   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:32.313220   24262 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 18:50:32.311669   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:32.314378   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:32.314466   24262 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 18:50:32.314486   24262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 18:50:32.314502   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:32.317152   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:32.317594   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:32.317630   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:32.317726   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:32.317886   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:32.318037   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:32.318199   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:32.328812   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0425 18:50:32.329168   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:32.329593   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:32.329615   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:32.329904   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:32.330052   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:50:32.331552   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:32.331784   24262 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 18:50:32.331797   24262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 18:50:32.331807   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:32.334376   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:32.334730   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:32.334754   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:32.334859   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:32.334993   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:32.335150   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:32.335261   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:32.475850   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0425 18:50:32.523374   24262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 18:50:32.535737   24262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 18:50:33.005661   24262 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
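	The sed pipeline at 18:50:32.475850 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host machine. Reconstructed from that sed expression (not copied from the live cluster), the Corefile gains a block like this just before its `forward . /etc/resolv.conf` line:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }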
	I0425 18:50:33.005783   24262 main.go:141] libmachine: Making call to close driver server
	I0425 18:50:33.005808   24262 main.go:141] libmachine: (ha-912667) Calling .Close
	I0425 18:50:33.006119   24262 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:50:33.006137   24262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:50:33.006145   24262 main.go:141] libmachine: Making call to close driver server
	I0425 18:50:33.006152   24262 main.go:141] libmachine: (ha-912667) Calling .Close
	I0425 18:50:33.006385   24262 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:50:33.006405   24262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:50:33.006514   24262 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0425 18:50:33.006521   24262 round_trippers.go:469] Request Headers:
	I0425 18:50:33.006545   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:50:33.006554   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:50:33.015352   24262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0425 18:50:33.015907   24262 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0425 18:50:33.015925   24262 round_trippers.go:469] Request Headers:
	I0425 18:50:33.015935   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:50:33.015943   24262 round_trippers.go:473]     Content-Type: application/json
	I0425 18:50:33.015947   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:50:33.018615   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
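	The GET on /apis/storage.k8s.io/v1/storageclasses followed by a PUT on .../storageclasses/standard is consistent with the default-storageclass addon marking `standard` as the cluster default. A hedged client-go sketch of that idea (illustrative only; the annotation key is the upstream Kubernetes convention, and the function is not minikube's code):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // markStandardDefault annotates the "standard" StorageClass as the
    // default class for the cluster. Client construction is omitted.
    func markStandardDefault(ctx context.Context, cs kubernetes.Interface) error {
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }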
	I0425 18:50:33.018824   24262 main.go:141] libmachine: Making call to close driver server
	I0425 18:50:33.018839   24262 main.go:141] libmachine: (ha-912667) Calling .Close
	I0425 18:50:33.019075   24262 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:50:33.019098   24262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:50:33.019111   24262 main.go:141] libmachine: (ha-912667) DBG | Closing plugin on server side
	I0425 18:50:33.368359   24262 main.go:141] libmachine: Making call to close driver server
	I0425 18:50:33.368387   24262 main.go:141] libmachine: (ha-912667) Calling .Close
	I0425 18:50:33.368681   24262 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:50:33.368696   24262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:50:33.368706   24262 main.go:141] libmachine: Making call to close driver server
	I0425 18:50:33.368737   24262 main.go:141] libmachine: (ha-912667) DBG | Closing plugin on server side
	I0425 18:50:33.368784   24262 main.go:141] libmachine: (ha-912667) Calling .Close
	I0425 18:50:33.369019   24262 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:50:33.369043   24262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:50:33.369046   24262 main.go:141] libmachine: (ha-912667) DBG | Closing plugin on server side
	I0425 18:50:33.370964   24262 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0425 18:50:33.371954   24262 addons.go:505] duration metric: took 1.09675326s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0425 18:50:33.371989   24262 start.go:245] waiting for cluster config update ...
	I0425 18:50:33.372004   24262 start.go:254] writing updated cluster config ...
	I0425 18:50:33.373842   24262 out.go:177] 
	I0425 18:50:33.375782   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:50:33.375868   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:50:33.377584   24262 out.go:177] * Starting "ha-912667-m02" control-plane node in "ha-912667" cluster
	I0425 18:50:33.379119   24262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:50:33.379141   24262 cache.go:56] Caching tarball of preloaded images
	I0425 18:50:33.379250   24262 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 18:50:33.379270   24262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 18:50:33.379334   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:50:33.379483   24262 start.go:360] acquireMachinesLock for ha-912667-m02: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 18:50:33.379525   24262 start.go:364] duration metric: took 22.545µs to acquireMachinesLock for "ha-912667-m02"
	I0425 18:50:33.379541   24262 start.go:93] Provisioning new machine with config: &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:50:33.379637   24262 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0425 18:50:33.381229   24262 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0425 18:50:33.381301   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:33.381332   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:33.396569   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I0425 18:50:33.396990   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:33.397539   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:33.397565   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:33.397874   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:33.398090   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetMachineName
	I0425 18:50:33.398285   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:50:33.398469   24262 start.go:159] libmachine.API.Create for "ha-912667" (driver="kvm2")
	I0425 18:50:33.398502   24262 client.go:168] LocalClient.Create starting
	I0425 18:50:33.398540   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 18:50:33.398580   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:50:33.398600   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:50:33.398664   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 18:50:33.398712   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:50:33.398732   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:50:33.398767   24262 main.go:141] libmachine: Running pre-create checks...
	I0425 18:50:33.398778   24262 main.go:141] libmachine: (ha-912667-m02) Calling .PreCreateCheck
	I0425 18:50:33.398958   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetConfigRaw
	I0425 18:50:33.399414   24262 main.go:141] libmachine: Creating machine...
	I0425 18:50:33.399432   24262 main.go:141] libmachine: (ha-912667-m02) Calling .Create
	I0425 18:50:33.399550   24262 main.go:141] libmachine: (ha-912667-m02) Creating KVM machine...
	I0425 18:50:33.400783   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found existing default KVM network
	I0425 18:50:33.400926   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found existing private KVM network mk-ha-912667
	I0425 18:50:33.401066   24262 main.go:141] libmachine: (ha-912667-m02) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02 ...
	I0425 18:50:33.401086   24262 main.go:141] libmachine: (ha-912667-m02) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 18:50:33.401153   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:33.401064   24677 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:50:33.401270   24262 main.go:141] libmachine: (ha-912667-m02) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 18:50:33.624278   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:33.624161   24677 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa...
	I0425 18:50:33.767748   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:33.767636   24677 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/ha-912667-m02.rawdisk...
	I0425 18:50:33.767776   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Writing magic tar header
	I0425 18:50:33.767816   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Writing SSH key tar header
	I0425 18:50:33.767835   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:33.767749   24677 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02 ...
	I0425 18:50:33.767892   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02
	I0425 18:50:33.767922   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02 (perms=drwx------)
	I0425 18:50:33.767938   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 18:50:33.767957   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:50:33.767971   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 18:50:33.767986   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 18:50:33.767995   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins
	I0425 18:50:33.768006   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home
	I0425 18:50:33.768030   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 18:50:33.768043   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Skipping /home - not owner
	I0425 18:50:33.768056   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 18:50:33.768068   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 18:50:33.768082   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 18:50:33.768094   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 18:50:33.768104   24262 main.go:141] libmachine: (ha-912667-m02) Creating domain...
	I0425 18:50:33.769015   24262 main.go:141] libmachine: (ha-912667-m02) define libvirt domain using xml: 
	I0425 18:50:33.769033   24262 main.go:141] libmachine: (ha-912667-m02) <domain type='kvm'>
	I0425 18:50:33.769059   24262 main.go:141] libmachine: (ha-912667-m02)   <name>ha-912667-m02</name>
	I0425 18:50:33.769081   24262 main.go:141] libmachine: (ha-912667-m02)   <memory unit='MiB'>2200</memory>
	I0425 18:50:33.769117   24262 main.go:141] libmachine: (ha-912667-m02)   <vcpu>2</vcpu>
	I0425 18:50:33.769140   24262 main.go:141] libmachine: (ha-912667-m02)   <features>
	I0425 18:50:33.769152   24262 main.go:141] libmachine: (ha-912667-m02)     <acpi/>
	I0425 18:50:33.769163   24262 main.go:141] libmachine: (ha-912667-m02)     <apic/>
	I0425 18:50:33.769193   24262 main.go:141] libmachine: (ha-912667-m02)     <pae/>
	I0425 18:50:33.769215   24262 main.go:141] libmachine: (ha-912667-m02)     
	I0425 18:50:33.769227   24262 main.go:141] libmachine: (ha-912667-m02)   </features>
	I0425 18:50:33.769238   24262 main.go:141] libmachine: (ha-912667-m02)   <cpu mode='host-passthrough'>
	I0425 18:50:33.769249   24262 main.go:141] libmachine: (ha-912667-m02)   
	I0425 18:50:33.769258   24262 main.go:141] libmachine: (ha-912667-m02)   </cpu>
	I0425 18:50:33.769272   24262 main.go:141] libmachine: (ha-912667-m02)   <os>
	I0425 18:50:33.769282   24262 main.go:141] libmachine: (ha-912667-m02)     <type>hvm</type>
	I0425 18:50:33.769291   24262 main.go:141] libmachine: (ha-912667-m02)     <boot dev='cdrom'/>
	I0425 18:50:33.769302   24262 main.go:141] libmachine: (ha-912667-m02)     <boot dev='hd'/>
	I0425 18:50:33.769312   24262 main.go:141] libmachine: (ha-912667-m02)     <bootmenu enable='no'/>
	I0425 18:50:33.769326   24262 main.go:141] libmachine: (ha-912667-m02)   </os>
	I0425 18:50:33.769338   24262 main.go:141] libmachine: (ha-912667-m02)   <devices>
	I0425 18:50:33.769347   24262 main.go:141] libmachine: (ha-912667-m02)     <disk type='file' device='cdrom'>
	I0425 18:50:33.769359   24262 main.go:141] libmachine: (ha-912667-m02)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/boot2docker.iso'/>
	I0425 18:50:33.769371   24262 main.go:141] libmachine: (ha-912667-m02)       <target dev='hdc' bus='scsi'/>
	I0425 18:50:33.769395   24262 main.go:141] libmachine: (ha-912667-m02)       <readonly/>
	I0425 18:50:33.769405   24262 main.go:141] libmachine: (ha-912667-m02)     </disk>
	I0425 18:50:33.769425   24262 main.go:141] libmachine: (ha-912667-m02)     <disk type='file' device='disk'>
	I0425 18:50:33.769446   24262 main.go:141] libmachine: (ha-912667-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 18:50:33.769471   24262 main.go:141] libmachine: (ha-912667-m02)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/ha-912667-m02.rawdisk'/>
	I0425 18:50:33.769490   24262 main.go:141] libmachine: (ha-912667-m02)       <target dev='hda' bus='virtio'/>
	I0425 18:50:33.769503   24262 main.go:141] libmachine: (ha-912667-m02)     </disk>
	I0425 18:50:33.769515   24262 main.go:141] libmachine: (ha-912667-m02)     <interface type='network'>
	I0425 18:50:33.769527   24262 main.go:141] libmachine: (ha-912667-m02)       <source network='mk-ha-912667'/>
	I0425 18:50:33.769536   24262 main.go:141] libmachine: (ha-912667-m02)       <model type='virtio'/>
	I0425 18:50:33.769548   24262 main.go:141] libmachine: (ha-912667-m02)     </interface>
	I0425 18:50:33.769565   24262 main.go:141] libmachine: (ha-912667-m02)     <interface type='network'>
	I0425 18:50:33.769578   24262 main.go:141] libmachine: (ha-912667-m02)       <source network='default'/>
	I0425 18:50:33.769589   24262 main.go:141] libmachine: (ha-912667-m02)       <model type='virtio'/>
	I0425 18:50:33.769601   24262 main.go:141] libmachine: (ha-912667-m02)     </interface>
	I0425 18:50:33.769610   24262 main.go:141] libmachine: (ha-912667-m02)     <serial type='pty'>
	I0425 18:50:33.769623   24262 main.go:141] libmachine: (ha-912667-m02)       <target port='0'/>
	I0425 18:50:33.769633   24262 main.go:141] libmachine: (ha-912667-m02)     </serial>
	I0425 18:50:33.769645   24262 main.go:141] libmachine: (ha-912667-m02)     <console type='pty'>
	I0425 18:50:33.769659   24262 main.go:141] libmachine: (ha-912667-m02)       <target type='serial' port='0'/>
	I0425 18:50:33.769670   24262 main.go:141] libmachine: (ha-912667-m02)     </console>
	I0425 18:50:33.769678   24262 main.go:141] libmachine: (ha-912667-m02)     <rng model='virtio'>
	I0425 18:50:33.769687   24262 main.go:141] libmachine: (ha-912667-m02)       <backend model='random'>/dev/random</backend>
	I0425 18:50:33.769697   24262 main.go:141] libmachine: (ha-912667-m02)     </rng>
	I0425 18:50:33.769706   24262 main.go:141] libmachine: (ha-912667-m02)     
	I0425 18:50:33.769715   24262 main.go:141] libmachine: (ha-912667-m02)     
	I0425 18:50:33.769727   24262 main.go:141] libmachine: (ha-912667-m02)   </devices>
	I0425 18:50:33.769741   24262 main.go:141] libmachine: (ha-912667-m02) </domain>
	I0425 18:50:33.769772   24262 main.go:141] libmachine: (ha-912667-m02) 
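	Consuming domain XML like the block above by hand would amount to a `virsh define` followed by a `virsh start`. A small illustrative sketch using os/exec (the kvm2 driver presumably talks to libvirt through its API rather than the CLI, so this is only an approximation of the step):

    package sketch

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // defineAndStart writes the generated domain XML to a temp file, registers
    // it with libvirt via `virsh define`, and boots the guest with `virsh start`.
    func defineAndStart(name, domainXML string) error {
        f, err := os.CreateTemp("", name+"-*.xml")
        if err != nil {
            return err
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(domainXML); err != nil {
            return err
        }
        f.Close()
        if out, err := exec.Command("virsh", "define", f.Name()).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh define: %v: %s", err, out)
        }
        if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh start: %v: %s", err, out)
        }
        return nil
    }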
	I0425 18:50:33.776550   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:13:07:ce in network default
	I0425 18:50:33.777140   24262 main.go:141] libmachine: (ha-912667-m02) Ensuring networks are active...
	I0425 18:50:33.777162   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:33.777943   24262 main.go:141] libmachine: (ha-912667-m02) Ensuring network default is active
	I0425 18:50:33.778329   24262 main.go:141] libmachine: (ha-912667-m02) Ensuring network mk-ha-912667 is active
	I0425 18:50:33.778759   24262 main.go:141] libmachine: (ha-912667-m02) Getting domain xml...
	I0425 18:50:33.779585   24262 main.go:141] libmachine: (ha-912667-m02) Creating domain...
	I0425 18:50:35.015579   24262 main.go:141] libmachine: (ha-912667-m02) Waiting to get IP...
	I0425 18:50:35.016401   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:35.016845   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:35.016875   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:35.016821   24677 retry.go:31] will retry after 272.31751ms: waiting for machine to come up
	I0425 18:50:35.290272   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:35.290859   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:35.290889   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:35.290809   24677 retry.go:31] will retry after 355.818103ms: waiting for machine to come up
	I0425 18:50:35.648332   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:35.648726   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:35.648764   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:35.648674   24677 retry.go:31] will retry after 313.196477ms: waiting for machine to come up
	I0425 18:50:35.962837   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:35.963324   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:35.963354   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:35.963277   24677 retry.go:31] will retry after 447.300584ms: waiting for machine to come up
	I0425 18:50:36.411853   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:36.412326   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:36.412350   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:36.412288   24677 retry.go:31] will retry after 735.041089ms: waiting for machine to come up
	I0425 18:50:37.148697   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:37.149163   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:37.149207   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:37.149105   24677 retry.go:31] will retry after 790.482572ms: waiting for machine to come up
	I0425 18:50:37.940815   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:37.941179   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:37.941227   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:37.941138   24677 retry.go:31] will retry after 838.320133ms: waiting for machine to come up
	I0425 18:50:38.780783   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:38.781250   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:38.781276   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:38.781217   24677 retry.go:31] will retry after 1.393143408s: waiting for machine to come up
	I0425 18:50:40.176650   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:40.177058   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:40.177082   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:40.177019   24677 retry.go:31] will retry after 1.382169864s: waiting for machine to come up
	I0425 18:50:41.560741   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:41.561116   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:41.561162   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:41.561079   24677 retry.go:31] will retry after 1.653935327s: waiting for machine to come up
	I0425 18:50:43.216296   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:43.216713   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:43.216737   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:43.216679   24677 retry.go:31] will retry after 1.806231222s: waiting for machine to come up
	I0425 18:50:45.024850   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:45.025330   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:45.025378   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:45.025319   24677 retry.go:31] will retry after 3.576127864s: waiting for machine to come up
	I0425 18:50:48.603197   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:48.603520   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:48.603551   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:48.603473   24677 retry.go:31] will retry after 3.829916567s: waiting for machine to come up
	I0425 18:50:52.437454   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:52.437860   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:52.437890   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:52.437815   24677 retry.go:31] will retry after 4.932389568s: waiting for machine to come up
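	The "will retry after ..." lines above come from a backoff loop: each failed IP lookup roughly doubles the wait, with some jitter, until the DHCP lease appears. A generic sketch of such a helper (names and the exact jitter are assumptions, not minikube's retry package):

    package sketch

    import (
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or attempts run out,
    // sleeping a randomly jittered, roughly doubling interval between tries --
    // the same shape as the intervals in the log above. base must be > 0.
    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
        var err error
        wait := base
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(wait) / 2))
            time.Sleep(wait + jitter)
            wait *= 2
        }
        return err
    }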
	I0425 18:50:57.371779   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:57.372290   24262 main.go:141] libmachine: (ha-912667-m02) Found IP for machine: 192.168.39.66
	I0425 18:50:57.372327   24262 main.go:141] libmachine: (ha-912667-m02) Reserving static IP address...
	I0425 18:50:57.372341   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has current primary IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:57.372644   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find host DHCP lease matching {name: "ha-912667-m02", mac: "52:54:00:5a:58:a0", ip: "192.168.39.66"} in network mk-ha-912667
	I0425 18:50:57.442440   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Getting to WaitForSSH function...
	I0425 18:50:57.442470   24262 main.go:141] libmachine: (ha-912667-m02) Reserved static IP address: 192.168.39.66
	I0425 18:50:57.442485   24262 main.go:141] libmachine: (ha-912667-m02) Waiting for SSH to be available...
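The repeated "will retry after …" lines above come from a polling loop that asks libvirt for the domain's DHCP lease with growing, jittered delays until an IP address shows up. A minimal Go sketch of that pattern (not minikube's actual retry.go; the probe, delays and failure message are illustrative only):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling probe until it succeeds or the deadline passes,
    // growing the wait with a little jitter between attempts.
    func retryWithBackoff(probe func() error, deadline time.Duration) error {
        start := time.Now()
        wait := 500 * time.Millisecond
        for time.Since(start) < deadline {
            if err := probe(); err == nil {
                return nil
            }
            // Grow the delay and add jitter, mirroring the increasing
            // 1.38s / 1.65s / 3.57s ... intervals printed above.
            wait = wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
        }
        return errors.New("timed out waiting for machine to come up")
    }

    func main() {
        attempts := 0
        _ = retryWithBackoff(func() error {
            attempts++
            if attempts < 4 {
                return errors.New("unable to find current IP address of domain")
            }
            return nil // pretend the DHCP lease finally appeared
        }, time.Minute)
    }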
	I0425 18:50:57.444830   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:57.445165   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667
	I0425 18:50:57.445197   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find defined IP address of network mk-ha-912667 interface with MAC address 52:54:00:5a:58:a0
	I0425 18:50:57.445339   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Using SSH client type: external
	I0425 18:50:57.445364   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa (-rw-------)
	I0425 18:50:57.445403   24262 main.go:141] libmachine: (ha-912667-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 18:50:57.445421   24262 main.go:141] libmachine: (ha-912667-m02) DBG | About to run SSH command:
	I0425 18:50:57.445448   24262 main.go:141] libmachine: (ha-912667-m02) DBG | exit 0
	I0425 18:50:57.448897   24262 main.go:141] libmachine: (ha-912667-m02) DBG | SSH cmd err, output: exit status 255: 
	I0425 18:50:57.448918   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0425 18:50:57.448934   24262 main.go:141] libmachine: (ha-912667-m02) DBG | command : exit 0
	I0425 18:50:57.448944   24262 main.go:141] libmachine: (ha-912667-m02) DBG | err     : exit status 255
	I0425 18:50:57.448958   24262 main.go:141] libmachine: (ha-912667-m02) DBG | output  : 
	I0425 18:51:00.449130   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Getting to WaitForSSH function...
	I0425 18:51:00.451492   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.451852   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:00.451879   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.452040   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Using SSH client type: external
	I0425 18:51:00.452066   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa (-rw-------)
	I0425 18:51:00.452099   24262 main.go:141] libmachine: (ha-912667-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 18:51:00.452116   24262 main.go:141] libmachine: (ha-912667-m02) DBG | About to run SSH command:
	I0425 18:51:00.452125   24262 main.go:141] libmachine: (ha-912667-m02) DBG | exit 0
	I0425 18:51:00.582574   24262 main.go:141] libmachine: (ha-912667-m02) DBG | SSH cmd err, output: <nil>: 
	I0425 18:51:00.582868   24262 main.go:141] libmachine: (ha-912667-m02) KVM machine creation complete!
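WaitForSSH above shells out to /usr/bin/ssh with a fixed option set and runs `exit 0`; the first attempt fails with exit status 255 because the lease is not visible yet, and a later attempt succeeds. A small Go sketch of that external probe, assuming placeholder host and key paths (192.0.2.10 and /tmp/id_rsa are examples, not values from the test cluster):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshExitZero runs `exit 0` on the guest via the system ssh binary,
    // with the same kinds of options the log shows.
    func sshExitZero(host, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + host,
            "exit 0",
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh probe failed: %v (output: %s)", err, out)
        }
        return nil
    }

    func main() {
        // Before sshd answers, this fails with exit status 255, exactly like
        // the first WaitForSSH attempt above.
        if err := sshExitZero("192.0.2.10", "/tmp/id_rsa"); err != nil {
            fmt.Println(err)
        }
    }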
	I0425 18:51:00.583228   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetConfigRaw
	I0425 18:51:00.583839   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:00.584002   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:00.584136   24262 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 18:51:00.584148   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 18:51:00.585297   24262 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 18:51:00.585311   24262 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 18:51:00.585317   24262 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 18:51:00.585324   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:00.587757   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.588116   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:00.588152   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.588285   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:00.588474   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.588663   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.588826   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:00.588976   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:00.589188   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:00.589203   24262 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 18:51:00.701950   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 18:51:00.701976   24262 main.go:141] libmachine: Detecting the provisioner...
	I0425 18:51:00.701985   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:00.704856   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.705163   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:00.705192   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.705338   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:00.705524   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.705719   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.705917   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:00.706078   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:00.706313   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:00.706329   24262 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 18:51:00.816075   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 18:51:00.816133   24262 main.go:141] libmachine: found compatible host: buildroot
	I0425 18:51:00.816140   24262 main.go:141] libmachine: Provisioning with buildroot...
	I0425 18:51:00.816147   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetMachineName
	I0425 18:51:00.816416   24262 buildroot.go:166] provisioning hostname "ha-912667-m02"
	I0425 18:51:00.816446   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetMachineName
	I0425 18:51:00.816639   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:00.819389   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.819767   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:00.819799   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.819979   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:00.820161   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.820323   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.820446   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:00.820601   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:00.820788   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:00.820801   24262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-912667-m02 && echo "ha-912667-m02" | sudo tee /etc/hostname
	I0425 18:51:00.951114   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-912667-m02
	
	I0425 18:51:00.951147   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:00.953844   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.954310   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:00.954338   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.954491   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:00.954667   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.954817   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.954923   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:00.955121   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:00.955274   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:00.955291   24262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-912667-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-912667-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-912667-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 18:51:01.076905   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
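The hostname step above is just a composed shell command pushed over SSH. A tiny Go sketch of how such a command string can be assembled (the node name here is a placeholder, not the test node):

    package main

    import "fmt"

    // setHostnameCmd reproduces the shape of the provisioning command logged above.
    func setHostnameCmd(name string) string {
        return fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
    }

    func main() {
        fmt.Println(setHostnameCmd("example-node-m02"))
    }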
	I0425 18:51:01.076933   24262 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 18:51:01.076949   24262 buildroot.go:174] setting up certificates
	I0425 18:51:01.076957   24262 provision.go:84] configureAuth start
	I0425 18:51:01.076965   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetMachineName
	I0425 18:51:01.077193   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:51:01.079866   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.080221   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.080248   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.080368   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.082445   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.082727   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.082759   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.082853   24262 provision.go:143] copyHostCerts
	I0425 18:51:01.082876   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:51:01.082911   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 18:51:01.082925   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:51:01.082987   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 18:51:01.083083   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:51:01.083104   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 18:51:01.083109   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:51:01.083133   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 18:51:01.083188   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:51:01.083204   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 18:51:01.083211   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:51:01.083231   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 18:51:01.083273   24262 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.ha-912667-m02 san=[127.0.0.1 192.168.39.66 ha-912667-m02 localhost minikube]
	I0425 18:51:01.174452   24262 provision.go:177] copyRemoteCerts
	I0425 18:51:01.174508   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 18:51:01.174533   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.177076   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.177364   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.177388   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.177531   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.177722   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.177881   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.177995   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	I0425 18:51:01.265418   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 18:51:01.265487   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0425 18:51:01.301867   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 18:51:01.301936   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 18:51:01.329938   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 18:51:01.330007   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 18:51:01.357857   24262 provision.go:87] duration metric: took 280.886715ms to configureAuth
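configureAuth above issues a server certificate whose SAN list covers loopback, the machine IP and a few hostnames, signed by the local CA, then copies it to /etc/docker on the guest. A minimal Go sketch of issuing such a SAN-bearing certificate (not minikube's certs.go; the CA is generated on the fly and all names/IPs are placeholders, with error handling elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA for the example.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with IP and DNS SANs, mirroring the san=[...] list in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "example-node-m02"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.0.2.10")},
            DNSNames:     []string{"localhost", "minikube", "example-node-m02"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }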
	I0425 18:51:01.357890   24262 buildroot.go:189] setting minikube options for container-runtime
	I0425 18:51:01.358063   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:51:01.358152   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.360692   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.361069   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.361100   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.361283   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.361511   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.361697   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.361874   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.362046   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:01.362236   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:01.362253   24262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 18:51:01.652870   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 18:51:01.652908   24262 main.go:141] libmachine: Checking connection to Docker...
	I0425 18:51:01.652918   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetURL
	I0425 18:51:01.654109   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Using libvirt version 6000000
	I0425 18:51:01.656105   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.656321   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.656342   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.656517   24262 main.go:141] libmachine: Docker is up and running!
	I0425 18:51:01.656529   24262 main.go:141] libmachine: Reticulating splines...
	I0425 18:51:01.656536   24262 client.go:171] duration metric: took 28.258024153s to LocalClient.Create
	I0425 18:51:01.656555   24262 start.go:167] duration metric: took 28.25808827s to libmachine.API.Create "ha-912667"
	I0425 18:51:01.656564   24262 start.go:293] postStartSetup for "ha-912667-m02" (driver="kvm2")
	I0425 18:51:01.656572   24262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 18:51:01.656589   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:01.656809   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 18:51:01.656830   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.658688   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.658975   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.659018   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.659091   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.659243   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.659380   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.659504   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	I0425 18:51:01.745446   24262 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 18:51:01.750300   24262 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 18:51:01.750323   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 18:51:01.750381   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 18:51:01.750445   24262 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 18:51:01.750457   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 18:51:01.750533   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 18:51:01.760679   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:51:01.788079   24262 start.go:296] duration metric: took 131.502365ms for postStartSetup
	I0425 18:51:01.788129   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetConfigRaw
	I0425 18:51:01.788753   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:51:01.791276   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.791619   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.791641   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.791921   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:51:01.792166   24262 start.go:128] duration metric: took 28.412517698s to createHost
	I0425 18:51:01.792190   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.794775   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.795128   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.795154   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.795356   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.795558   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.795702   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.795863   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.796007   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:01.796177   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:01.796191   24262 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 18:51:01.907571   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714071061.895155103
	
	I0425 18:51:01.907590   24262 fix.go:216] guest clock: 1714071061.895155103
	I0425 18:51:01.907596   24262 fix.go:229] Guest: 2024-04-25 18:51:01.895155103 +0000 UTC Remote: 2024-04-25 18:51:01.792180512 +0000 UTC m=+86.367594385 (delta=102.974591ms)
	I0425 18:51:01.907613   24262 fix.go:200] guest clock delta is within tolerance: 102.974591ms
	I0425 18:51:01.907620   24262 start.go:83] releasing machines lock for "ha-912667-m02", held for 28.528086055s
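The guest-clock check above parses the guest's `date +%s.%N` output and compares it against the host-side timestamp taken around the SSH call, accepting small drift. A sketch of that comparison in Go, using the sample value from the log and an assumed one-second tolerance:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // What `date +%s.%N` printed on the guest (sample value from the log).
        guestOut := "1714071061.895155103"
        secs, _ := strconv.ParseFloat(guestOut, 64)
        guest := time.Unix(0, int64(secs*float64(time.Second)))

        // Host-side timestamp recorded around the SSH call (example value).
        host := time.Unix(0, int64(1714071061.792180512*float64(time.Second)))

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        // One second is an assumed threshold for this sketch, not minikube's constant.
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < time.Second)
    }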
	I0425 18:51:01.907640   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:01.907925   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:51:01.910373   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.910676   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.910705   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.913168   24262 out.go:177] * Found network options:
	I0425 18:51:01.914669   24262 out.go:177]   - NO_PROXY=192.168.39.189
	W0425 18:51:01.915767   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 18:51:01.915815   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:01.916457   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:01.916686   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:01.916774   24262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 18:51:01.916815   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	W0425 18:51:01.916848   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 18:51:01.916923   24262 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 18:51:01.916946   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.919610   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.919905   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.919988   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.920014   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.920133   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.920312   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.920336   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.920352   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.920477   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.920625   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.920681   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	I0425 18:51:01.920964   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.921126   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.921293   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	I0425 18:51:02.161922   24262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 18:51:02.168965   24262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 18:51:02.169031   24262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 18:51:02.187890   24262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 18:51:02.187925   24262 start.go:494] detecting cgroup driver to use...
	I0425 18:51:02.187998   24262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 18:51:02.205507   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 18:51:02.221282   24262 docker.go:217] disabling cri-docker service (if available) ...
	I0425 18:51:02.221340   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 18:51:02.239998   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 18:51:02.256143   24262 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 18:51:02.383796   24262 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 18:51:02.546378   24262 docker.go:233] disabling docker service ...
	I0425 18:51:02.546439   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 18:51:02.564419   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 18:51:02.580135   24262 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 18:51:02.732786   24262 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 18:51:02.858389   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 18:51:02.875385   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 18:51:02.897227   24262 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 18:51:02.897285   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.908319   24262 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 18:51:02.908366   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.920325   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.932150   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.944074   24262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 18:51:02.956417   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.968165   24262 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.988373   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.999369   24262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 18:51:03.008969   24262 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 18:51:03.009010   24262 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 18:51:03.023941   24262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 18:51:03.034370   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:51:03.166610   24262 ssh_runner.go:195] Run: sudo systemctl restart crio
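The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place so CRI-O uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager before the service is restarted. A Go sketch of the same two rewrites applied to an in-memory example config (not the file from the VM):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.6"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }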
	I0425 18:51:03.319627   24262 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 18:51:03.319697   24262 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 18:51:03.324957   24262 start.go:562] Will wait 60s for crictl version
	I0425 18:51:03.325023   24262 ssh_runner.go:195] Run: which crictl
	I0425 18:51:03.329276   24262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 18:51:03.369309   24262 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 18:51:03.369393   24262 ssh_runner.go:195] Run: crio --version
	I0425 18:51:03.402343   24262 ssh_runner.go:195] Run: crio --version
	I0425 18:51:03.434551   24262 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 18:51:03.435880   24262 out.go:177]   - env NO_PROXY=192.168.39.189
	I0425 18:51:03.437106   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:51:03.439538   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:03.439878   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:03.439904   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:03.440103   24262 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 18:51:03.444466   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
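The bash one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends a fresh one pointing at the host-side gateway. A small Go sketch of that rewrite working on an in-memory copy of the file (paths and contents are illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostEntry removes lines ending in "\t<name>" and appends "ip\tname",
    // matching the grep -v / echo pipeline in the log.
    func ensureHostEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n"
        fmt.Print(ensureHostEntry(hosts, "192.168.39.1", "host.minikube.internal"))
    }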
	I0425 18:51:03.458794   24262 mustload.go:65] Loading cluster: ha-912667
	I0425 18:51:03.458962   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:51:03.459232   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:51:03.459264   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:51:03.474332   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0425 18:51:03.474706   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:51:03.475141   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:51:03.475159   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:51:03.475482   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:51:03.475659   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:51:03.476988   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:51:03.477290   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:51:03.477314   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:51:03.491072   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33319
	I0425 18:51:03.491489   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:51:03.491914   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:51:03.491934   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:51:03.492166   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:51:03.492290   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:51:03.492452   24262 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667 for IP: 192.168.39.66
	I0425 18:51:03.492465   24262 certs.go:194] generating shared ca certs ...
	I0425 18:51:03.492478   24262 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:51:03.492597   24262 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 18:51:03.492640   24262 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 18:51:03.492650   24262 certs.go:256] generating profile certs ...
	I0425 18:51:03.492734   24262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key
	I0425 18:51:03.492758   24262 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.cf2d0a5d
	I0425 18:51:03.492772   24262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.cf2d0a5d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189 192.168.39.66 192.168.39.254]
	I0425 18:51:03.953364   24262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.cf2d0a5d ...
	I0425 18:51:03.953396   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.cf2d0a5d: {Name:mk5137ba25a9fe77d3cb81ec7a2b2234f923a19a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:51:03.953559   24262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.cf2d0a5d ...
	I0425 18:51:03.953578   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.cf2d0a5d: {Name:mk91a6ad2b600314c57d75711856799b66f33329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:51:03.953650   24262 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.cf2d0a5d -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt
	I0425 18:51:03.953780   24262 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.cf2d0a5d -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key
	I0425 18:51:03.953903   24262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key
	I0425 18:51:03.953919   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 18:51:03.953932   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 18:51:03.953942   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 18:51:03.953952   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 18:51:03.953965   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 18:51:03.953975   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 18:51:03.953986   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 18:51:03.953997   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 18:51:03.954041   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 18:51:03.954070   24262 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 18:51:03.954082   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 18:51:03.954111   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 18:51:03.954138   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 18:51:03.954159   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 18:51:03.954194   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:51:03.954239   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:51:03.954254   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 18:51:03.954271   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 18:51:03.954301   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:51:03.957478   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:51:03.957903   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:51:03.957927   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:51:03.958104   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:51:03.958326   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:51:03.958527   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:51:03.958671   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:51:04.034625   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0425 18:51:04.040517   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0425 18:51:04.053076   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0425 18:51:04.058065   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0425 18:51:04.069603   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0425 18:51:04.074596   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0425 18:51:04.086275   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0425 18:51:04.091181   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0425 18:51:04.105497   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0425 18:51:04.110602   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0425 18:51:04.124115   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0425 18:51:04.130248   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0425 18:51:04.143925   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 18:51:04.173909   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 18:51:04.202128   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 18:51:04.230714   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 18:51:04.260359   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0425 18:51:04.288811   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 18:51:04.317244   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 18:51:04.345486   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0425 18:51:04.375240   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 18:51:04.406393   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 18:51:04.434825   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 18:51:04.461976   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0425 18:51:04.481575   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0425 18:51:04.507211   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0425 18:51:04.526733   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0425 18:51:04.545783   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0425 18:51:04.565083   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0425 18:51:04.584386   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0425 18:51:04.603416   24262 ssh_runner.go:195] Run: openssl version
	I0425 18:51:04.609572   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 18:51:04.625232   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:51:04.630553   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:51:04.630609   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:51:04.637817   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 18:51:04.651543   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 18:51:04.666365   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 18:51:04.671559   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 18:51:04.671632   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 18:51:04.678421   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 18:51:04.694278   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 18:51:04.709055   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 18:51:04.714405   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 18:51:04.714466   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 18:51:04.721051   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
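The openssl/ln pairs above compute each CA certificate's subject hash and create the <hash>.0 symlink under /etc/ssl/certs so OpenSSL-based clients can locate the CA. A Go sketch of that step, assuming the openssl binary is on PATH and using placeholder paths:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCA asks openssl for the cert's subject hash and links <hash>.0 to it,
    // the equivalent of `ln -fs` after `openssl x509 -hash -noout`.
    func linkCA(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace any existing link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println("linkCA:", err)
        }
    }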
	I0425 18:51:04.734427   24262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 18:51:04.739445   24262 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 18:51:04.739509   24262 kubeadm.go:928] updating node {m02 192.168.39.66 8443 v1.30.0 crio true true} ...
	I0425 18:51:04.739598   24262 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-912667-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
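
kubeadm.go:940 above logs the kubelet systemd drop-in rendered for m02: the runtime dependency, the ExecStart override with --hostname-override and --node-ip, and the cluster config it was derived from. A small text/template sketch that produces an equivalent drop-in follows; the template text here is an assumption for illustration, not the template minikube actually ships:

// Illustrative sketch: render a kubelet systemd drop-in like the one logged
// above from node-specific values.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":  "crio",
		"Version":  "v1.30.0",
		"NodeName": "ha-912667-m02",
		"NodeIP":   "192.168.39.66",
	})
}
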
	I0425 18:51:04.739630   24262 kube-vip.go:111] generating kube-vip config ...
	I0425 18:51:04.739681   24262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0425 18:51:04.759989   24262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0425 18:51:04.760061   24262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0425 18:51:04.760110   24262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 18:51:04.772098   24262 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0425 18:51:04.772159   24262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0425 18:51:04.784264   24262 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0425 18:51:04.784280   24262 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0425 18:51:04.784293   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0425 18:51:04.784313   24262 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0425 18:51:04.784376   24262 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0425 18:51:04.790145   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0425 18:51:04.790180   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0425 18:51:36.325627   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0425 18:51:36.325733   24262 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0425 18:51:36.331308   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0425 18:51:36.331341   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0425 18:52:08.753573   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:52:08.772949   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0425 18:52:08.773028   24262 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0425 18:52:08.779035   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0425 18:52:08.779072   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
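
Each binary above is fetched with a ?checksum=file:<url>.sha256 query, i.e. the download is verified against the published SHA-256 digest before it is scp'd into /var/lib/minikube/binaries. The following is a hedged sketch of that verify-then-install pattern in plain Go, an equivalent illustration rather than the code behind download.go:107:

// Illustrative sketch of verifying a Kubernetes release binary against its
// published .sha256 file before installing it.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch returns the full body of a URL.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the .sha256 file starts with the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch, refusing to install")
	}
	fmt.Println("checksum verified")
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
}
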
	I0425 18:52:09.276520   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0425 18:52:09.288087   24262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0425 18:52:09.310275   24262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 18:52:09.329583   24262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0425 18:52:09.348142   24262 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0425 18:52:09.352733   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:52:09.366360   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:52:09.488599   24262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:52:09.507460   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:52:09.507888   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:52:09.507930   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:52:09.523497   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0425 18:52:09.524076   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:52:09.524576   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:52:09.524613   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:52:09.524992   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:52:09.525213   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:52:09.525389   24262 start.go:316] joinCluster: &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:52:09.525500   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0425 18:52:09.525523   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:52:09.528382   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:52:09.528804   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:52:09.528845   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:52:09.528980   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:52:09.529134   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:52:09.529277   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:52:09.529398   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:52:09.695957   24262 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:52:09.696007   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hjy66t.mflcauxv23x5gsd7 --discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-912667-m02 --control-plane --apiserver-advertise-address=192.168.39.66 --apiserver-bind-port=8443"
	I0425 18:52:33.205575   24262 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hjy66t.mflcauxv23x5gsd7 --discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-912667-m02 --control-plane --apiserver-advertise-address=192.168.39.66 --apiserver-bind-port=8443": (23.509540296s)
	I0425 18:52:33.205620   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0425 18:52:33.729595   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-912667-m02 minikube.k8s.io/updated_at=2024_04_25T18_52_33_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=ha-912667 minikube.k8s.io/primary=false
	I0425 18:52:33.900493   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-912667-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0425 18:52:34.064880   24262 start.go:318] duration metric: took 24.539487846s to joinCluster
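
The join above is assembled from two halves: the output of `kubeadm token create --print-join-command --ttl=0` run on the existing control plane, plus the control-plane-specific flags for m02 (node name, CRI socket, advertise address, bind port). A rough sketch of that assembly follows; it is an assumption about the shape of the logic, not minikube's start.go, and the token/hash values are placeholders:

// Rough sketch of composing the control-plane join command logged above.
package main

import (
	"fmt"
	"strings"
)

func controlPlaneJoin(printJoinOutput, nodeName, advertiseIP string, port int) string {
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", port),
	}
	return strings.TrimSpace(printJoinOutput) + " " + strings.Join(extra, " ")
}

func main() {
	// Placeholder output from "kubeadm token create --print-join-command".
	primaryOutput := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
	fmt.Println(controlPlaneJoin(primaryOutput, "ha-912667-m02", "192.168.39.66", 8443))
}
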
	I0425 18:52:34.064959   24262 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:52:34.066637   24262 out.go:177] * Verifying Kubernetes components...
	I0425 18:52:34.065259   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:52:34.068009   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:52:34.342188   24262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:52:34.371769   24262 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:52:34.372092   24262 kapi.go:59] client config for ha-912667: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt", KeyFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key", CAFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0425 18:52:34.372178   24262 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.189:8443
	I0425 18:52:34.372452   24262 node_ready.go:35] waiting up to 6m0s for node "ha-912667-m02" to be "Ready" ...
	I0425 18:52:34.372561   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:34.372572   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:34.372583   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:34.372588   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:34.384927   24262 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0425 18:52:34.873548   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:34.873570   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:34.873578   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:34.873583   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:34.882515   24262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0425 18:52:35.373637   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:35.373659   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:35.373670   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:35.373675   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:35.379726   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:52:35.873343   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:35.873365   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:35.873372   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:35.873376   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:35.877316   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:36.372657   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:36.372680   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:36.372689   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:36.372692   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:36.376310   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:36.377188   24262 node_ready.go:53] node "ha-912667-m02" has status "Ready":"False"
	I0425 18:52:36.872637   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:36.872662   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:36.872670   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:36.872675   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:36.876849   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:37.373610   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:37.373640   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:37.373654   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:37.373659   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:37.377391   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:37.873065   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:37.873087   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:37.873095   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:37.873100   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:37.879292   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:52:38.373547   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:38.373571   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:38.373579   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:38.373583   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:38.378058   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:38.378985   24262 node_ready.go:53] node "ha-912667-m02" has status "Ready":"False"
	I0425 18:52:38.873413   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:38.873436   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:38.873443   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:38.873447   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:38.876991   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:39.373427   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:39.373458   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:39.373469   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:39.373476   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:39.377810   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:39.873420   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:39.873443   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:39.873450   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:39.873455   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:39.877099   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:40.373137   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:40.373220   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:40.373235   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:40.373240   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:40.378474   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:40.379541   24262 node_ready.go:53] node "ha-912667-m02" has status "Ready":"False"
	I0425 18:52:40.873292   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:40.873319   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:40.873330   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:40.873338   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:40.884101   24262 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0425 18:52:41.373410   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:41.373434   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:41.373440   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:41.373444   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:41.379369   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:41.873602   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:41.873629   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:41.873637   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:41.873642   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:41.877191   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:42.373511   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:42.373547   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.373553   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.373559   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.377598   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:42.378463   24262 node_ready.go:49] node "ha-912667-m02" has status "Ready":"True"
	I0425 18:52:42.378481   24262 node_ready.go:38] duration metric: took 8.005998806s for node "ha-912667-m02" to be "Ready" ...
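
node_ready.go polls GET /api/v1/nodes/ha-912667-m02 roughly every half second until the node reports the Ready condition, which is what the repeated round_trippers lines above show. A simplified client-go sketch of such a wait loop follows; it assumes k8s.io/client-go is available, uses a placeholder kubeconfig path, and is not minikube's actual implementation:

// Simplified sketch: poll a node until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func ready(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-912667-m02", metav1.GetOptions{})
		if err == nil && ready(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	panic("timed out waiting for node to be Ready")
}
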
	I0425 18:52:42.378489   24262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 18:52:42.378545   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:52:42.378555   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.378562   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.378565   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.384147   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:42.391456   24262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.391554   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-22wvx
	I0425 18:52:42.391567   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.391578   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.391587   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.397170   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:42.397948   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:42.397969   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.397978   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.397987   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.401467   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:42.402146   24262 pod_ready.go:92] pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:42.402167   24262 pod_ready.go:81] duration metric: took 10.683846ms for pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.402179   24262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.402262   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-h4s2h
	I0425 18:52:42.402274   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.402284   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.402291   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.405106   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:42.406039   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:42.406053   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.406060   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.406065   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.408563   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:42.409259   24262 pod_ready.go:92] pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:42.409281   24262 pod_ready.go:81] duration metric: took 7.093835ms for pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.409294   24262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.409354   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667
	I0425 18:52:42.409365   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.409374   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.409386   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.412025   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:42.412614   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:42.412627   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.412634   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.412638   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.415013   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:42.415520   24262 pod_ready.go:92] pod "etcd-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:42.415538   24262 pod_ready.go:81] duration metric: took 6.235612ms for pod "etcd-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.415549   24262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.415609   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m02
	I0425 18:52:42.415620   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.415629   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.415639   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.418675   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:42.419657   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:42.419670   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.419680   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.419685   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.427005   24262 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0425 18:52:42.915899   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m02
	I0425 18:52:42.915921   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.915928   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.915933   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.919689   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:42.920328   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:42.920350   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.920374   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.920378   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.923472   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.416409   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m02
	I0425 18:52:43.416430   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.416437   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.416442   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.420157   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.421097   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:43.421117   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.421127   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.421132   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.424546   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.425483   24262 pod_ready.go:92] pod "etcd-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:43.425504   24262 pod_ready.go:81] duration metric: took 1.009946144s for pod "etcd-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:43.425524   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:43.425598   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667
	I0425 18:52:43.425609   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.425618   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.425627   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.428956   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.574106   24262 request.go:629] Waited for 143.841662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:43.574168   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:43.574173   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.574180   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.574184   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.578066   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.578761   24262 pod_ready.go:92] pod "kube-apiserver-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:43.578779   24262 pod_ready.go:81] duration metric: took 153.248043ms for pod "kube-apiserver-ha-912667" in "kube-system" namespace to be "Ready" ...
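
The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter: the kapi.go config dump above leaves QPS and Burst at 0, so the defaults (5 requests/second, burst of 10) apply and bursts of node/pod GETs queue up briefly. Raising those limits on a rest.Config looks like the sketch below; this is illustrative only and does not imply minikube should change them:

// Illustrative sketch: raise client-go's client-side rate limits.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second when left at 0
	cfg.Burst = 100 // default burst is 10 when left at 0
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
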
	I0425 18:52:43.578792   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:43.774216   24262 request.go:629] Waited for 195.339462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:43.774267   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:43.774272   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.774279   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.774283   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.778170   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.974360   24262 request.go:629] Waited for 195.375592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:43.974425   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:43.974432   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.974442   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.974447   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.978839   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:44.173816   24262 request.go:629] Waited for 94.267791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:44.173896   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:44.173908   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:44.173918   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:44.173926   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:44.178191   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:44.374450   24262 request.go:629] Waited for 195.373961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:44.374529   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:44.374534   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:44.374541   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:44.374544   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:44.378227   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:44.579975   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:44.580000   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:44.580013   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:44.580018   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:44.584635   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:44.774535   24262 request.go:629] Waited for 188.388488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:44.774638   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:44.774652   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:44.774661   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:44.774674   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:44.778025   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:45.079676   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:45.079699   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:45.079706   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:45.079709   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:45.083902   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:45.174281   24262 request.go:629] Waited for 89.281344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:45.174348   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:45.174354   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:45.174361   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:45.174364   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:45.178008   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:45.579329   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:45.579357   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:45.579365   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:45.579368   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:45.583223   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:45.584345   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:45.584362   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:45.584369   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:45.584375   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:45.587651   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:45.588479   24262 pod_ready.go:102] pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace has status "Ready":"False"
	I0425 18:52:46.079707   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:46.079728   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:46.079735   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:46.079738   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:46.083723   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:46.084499   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:46.084516   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:46.084532   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:46.084540   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:46.087461   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:46.579409   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:46.579437   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:46.579446   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:46.579452   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:46.583019   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:46.583880   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:46.583899   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:46.583906   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:46.583910   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:46.586780   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:47.080000   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:47.080028   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:47.080036   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:47.080040   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:47.085744   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:47.087650   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:47.087671   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:47.087682   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:47.087687   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:47.091461   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:47.579653   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:47.579679   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:47.579690   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:47.579695   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:47.583170   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:47.584026   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:47.584040   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:47.584047   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:47.584051   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:47.587441   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:48.079499   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:48.079526   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.079537   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.079545   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.083310   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:48.084241   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:48.084259   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.084269   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.084274   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.087226   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:48.087986   24262 pod_ready.go:92] pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:48.088010   24262 pod_ready.go:81] duration metric: took 4.509210477s for pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:48.088023   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:48.088094   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667
	I0425 18:52:48.088106   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.088114   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.088118   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.090857   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:48.091736   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:48.091756   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.091763   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.091767   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.094847   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:48.095386   24262 pod_ready.go:92] pod "kube-controller-manager-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:48.095405   24262 pod_ready.go:81] duration metric: took 7.373536ms for pod "kube-controller-manager-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:48.095414   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:48.173634   24262 request.go:629] Waited for 78.161409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:52:48.173722   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:52:48.173739   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.173748   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.173755   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.177915   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:48.374085   24262 request.go:629] Waited for 195.377261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:48.374141   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:48.374145   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.374153   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.374162   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.378110   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:48.596548   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:52:48.596568   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.596576   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.596581   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.602761   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:52:48.774030   24262 request.go:629] Waited for 170.345255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:48.774080   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:48.774086   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.774093   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.774097   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.777879   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.095701   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:52:49.095738   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.095748   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.095753   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.107464   24262 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0425 18:52:49.174567   24262 request.go:629] Waited for 66.224162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:49.174631   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:49.174646   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.174657   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.174664   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.178072   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.178831   24262 pod_ready.go:92] pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:49.178851   24262 pod_ready.go:81] duration metric: took 1.083431205s for pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:49.178861   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mkgv5" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:49.374276   24262 request.go:629] Waited for 195.361619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkgv5
	I0425 18:52:49.374358   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkgv5
	I0425 18:52:49.374366   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.374373   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.374377   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.377845   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.573854   24262 request.go:629] Waited for 195.222888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:49.573906   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:49.573911   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.573919   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.573923   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.577462   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.578290   24262 pod_ready.go:92] pod "kube-proxy-mkgv5" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:49.578310   24262 pod_ready.go:81] duration metric: took 399.443842ms for pod "kube-proxy-mkgv5" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:49.578326   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rkbcp" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:49.774283   24262 request.go:629] Waited for 195.902176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rkbcp
	I0425 18:52:49.774337   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rkbcp
	I0425 18:52:49.774342   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.774352   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.774402   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.778081   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.974476   24262 request.go:629] Waited for 195.376224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:49.974539   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:49.974544   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.974556   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.974561   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.977955   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.978796   24262 pod_ready.go:92] pod "kube-proxy-rkbcp" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:49.978819   24262 pod_ready.go:81] duration metric: took 400.485794ms for pod "kube-proxy-rkbcp" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:49.978832   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:50.173957   24262 request.go:629] Waited for 195.06393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667
	I0425 18:52:50.174048   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667
	I0425 18:52:50.174062   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.174072   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.174077   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.177434   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:50.373598   24262 request.go:629] Waited for 195.337793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:50.373671   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:50.373676   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.373682   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.373685   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.377154   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:50.378023   24262 pod_ready.go:92] pod "kube-scheduler-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:50.378045   24262 pod_ready.go:81] duration metric: took 399.203687ms for pod "kube-scheduler-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:50.378059   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:50.574460   24262 request.go:629] Waited for 196.320169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m02
	I0425 18:52:50.574518   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m02
	I0425 18:52:50.574526   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.574535   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.574542   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.580026   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:50.774247   24262 request.go:629] Waited for 193.363837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:50.774299   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:50.774305   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.774312   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.774315   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.778005   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:50.778960   24262 pod_ready.go:92] pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:50.778985   24262 pod_ready.go:81] duration metric: took 400.916758ms for pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:50.778999   24262 pod_ready.go:38] duration metric: took 8.400497325s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
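
The wait above polls each system-critical pod's Ready condition (and the node that owns it) with throttled GETs until everything reports Ready. A minimal client-go sketch of that pattern, for illustration only — it is not minikube's pod_ready helper, and the kubeconfig path, pod name, and timeout are assumptions:

// readiness_sketch.go — hypothetical sketch of polling a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady re-reads the pod until its PodReady condition is True or the deadline passes.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // client-side throttling spaces these GETs out, as in the log
	}
	return fmt.Errorf("pod %s/%s never became Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-mkgv5", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}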
	I0425 18:52:50.779017   24262 api_server.go:52] waiting for apiserver process to appear ...
	I0425 18:52:50.779077   24262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:52:50.794983   24262 api_server.go:72] duration metric: took 16.729987351s to wait for apiserver process to appear ...
	I0425 18:52:50.795010   24262 api_server.go:88] waiting for apiserver healthz status ...
	I0425 18:52:50.795032   24262 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I0425 18:52:50.799683   24262 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I0425 18:52:50.799759   24262 round_trippers.go:463] GET https://192.168.39.189:8443/version
	I0425 18:52:50.799769   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.799776   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.799780   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.800649   24262 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 18:52:50.800727   24262 api_server.go:141] control plane version: v1.30.0
	I0425 18:52:50.800743   24262 api_server.go:131] duration metric: took 5.726686ms to wait for apiserver health ...
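
The health gate above is two plain GETs against the apiserver endpoint: /healthz must return 200 with body "ok", and /version yields the control-plane version (v1.30.0 here). A hedged net/http sketch of that probe; skipping TLS verification is an assumption for brevity — the real check trusts the cluster CA and presents client certificates:

// healthz_sketch.go — hypothetical sketch of the apiserver health probe.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// assumption: verification skipped only for this illustration
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.189:8443" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
	}
}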
	I0425 18:52:50.800749   24262 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 18:52:50.974161   24262 request.go:629] Waited for 173.32943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:52:50.974234   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:52:50.974242   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.974252   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.974261   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.980896   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:52:50.987446   24262 system_pods.go:59] 17 kube-system pods found
	I0425 18:52:50.987476   24262 system_pods.go:61] "coredns-7db6d8ff4d-22wvx" [56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e] Running
	I0425 18:52:50.987482   24262 system_pods.go:61] "coredns-7db6d8ff4d-h4s2h" [f9e2233c-5350-47ab-bdae-6fa35972b601] Running
	I0425 18:52:50.987486   24262 system_pods.go:61] "etcd-ha-912667" [d18fe5ec-655e-4da4-b8de-782eef846d55] Running
	I0425 18:52:50.987489   24262 system_pods.go:61] "etcd-ha-912667-m02" [8d6782f6-b00b-4d10-8a3a-452460974164] Running
	I0425 18:52:50.987492   24262 system_pods.go:61] "kindnet-sq4lb" [049d5dc9-13ec-4135-8785-229071e57d1a] Running
	I0425 18:52:50.987495   24262 system_pods.go:61] "kindnet-xlvjt" [191ff28e-07d7-459e-afe5-e3d8c23e1016] Running
	I0425 18:52:50.987498   24262 system_pods.go:61] "kube-apiserver-ha-912667" [a8339e9c-d67f-4e84-ba79-754ad86fdf82] Running
	I0425 18:52:50.987501   24262 system_pods.go:61] "kube-apiserver-ha-912667-m02" [a420b2a1-207a-435f-98d2-893836a60e78] Running
	I0425 18:52:50.987508   24262 system_pods.go:61] "kube-controller-manager-ha-912667" [6a91aebd-e142-4165-8acb-cc4c49a5df54] Running
	I0425 18:52:50.987511   24262 system_pods.go:61] "kube-controller-manager-ha-912667-m02" [e94e1a60-af79-4a8e-ac11-e7d36c3d68a3] Running
	I0425 18:52:50.987514   24262 system_pods.go:61] "kube-proxy-mkgv5" [7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a] Running
	I0425 18:52:50.987517   24262 system_pods.go:61] "kube-proxy-rkbcp" [c62d3486-15d6-4398-a397-2f542d8fb074] Running
	I0425 18:52:50.987523   24262 system_pods.go:61] "kube-scheduler-ha-912667" [7dc33762-4bee-467e-9db4-d783ffe04992] Running
	I0425 18:52:50.987526   24262 system_pods.go:61] "kube-scheduler-ha-912667-m02" [d2ab7cf9-3cd9-4b0b-aec1-26aee5cf3b2a] Running
	I0425 18:52:50.987528   24262 system_pods.go:61] "kube-vip-ha-912667" [bd3267a7-206d-4e47-b154-a7f17a492684] Running
	I0425 18:52:50.987532   24262 system_pods.go:61] "kube-vip-ha-912667-m02" [c0622f7e-0264-4168-b510-7563083cc9d3] Running
	I0425 18:52:50.987536   24262 system_pods.go:61] "storage-provisioner" [f3a0b111-609d-49b3-a056-71eb4b641224] Running
	I0425 18:52:50.987541   24262 system_pods.go:74] duration metric: took 186.787283ms to wait for pod list to return data ...
	I0425 18:52:50.987552   24262 default_sa.go:34] waiting for default service account to be created ...
	I0425 18:52:51.173970   24262 request.go:629] Waited for 186.329986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/default/serviceaccounts
	I0425 18:52:51.174022   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/default/serviceaccounts
	I0425 18:52:51.174027   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:51.174034   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:51.174038   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:51.178033   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:51.178307   24262 default_sa.go:45] found service account: "default"
	I0425 18:52:51.178328   24262 default_sa.go:55] duration metric: took 190.770193ms for default service account to be created ...
	I0425 18:52:51.178340   24262 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 18:52:51.373697   24262 request.go:629] Waited for 195.296743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:52:51.373783   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:52:51.373791   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:51.373798   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:51.373809   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:51.381703   24262 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0425 18:52:51.386543   24262 system_pods.go:86] 17 kube-system pods found
	I0425 18:52:51.386578   24262 system_pods.go:89] "coredns-7db6d8ff4d-22wvx" [56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e] Running
	I0425 18:52:51.386585   24262 system_pods.go:89] "coredns-7db6d8ff4d-h4s2h" [f9e2233c-5350-47ab-bdae-6fa35972b601] Running
	I0425 18:52:51.386591   24262 system_pods.go:89] "etcd-ha-912667" [d18fe5ec-655e-4da4-b8de-782eef846d55] Running
	I0425 18:52:51.386597   24262 system_pods.go:89] "etcd-ha-912667-m02" [8d6782f6-b00b-4d10-8a3a-452460974164] Running
	I0425 18:52:51.386602   24262 system_pods.go:89] "kindnet-sq4lb" [049d5dc9-13ec-4135-8785-229071e57d1a] Running
	I0425 18:52:51.386609   24262 system_pods.go:89] "kindnet-xlvjt" [191ff28e-07d7-459e-afe5-e3d8c23e1016] Running
	I0425 18:52:51.386617   24262 system_pods.go:89] "kube-apiserver-ha-912667" [a8339e9c-d67f-4e84-ba79-754ad86fdf82] Running
	I0425 18:52:51.386624   24262 system_pods.go:89] "kube-apiserver-ha-912667-m02" [a420b2a1-207a-435f-98d2-893836a60e78] Running
	I0425 18:52:51.386634   24262 system_pods.go:89] "kube-controller-manager-ha-912667" [6a91aebd-e142-4165-8acb-cc4c49a5df54] Running
	I0425 18:52:51.386641   24262 system_pods.go:89] "kube-controller-manager-ha-912667-m02" [e94e1a60-af79-4a8e-ac11-e7d36c3d68a3] Running
	I0425 18:52:51.386651   24262 system_pods.go:89] "kube-proxy-mkgv5" [7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a] Running
	I0425 18:52:51.386658   24262 system_pods.go:89] "kube-proxy-rkbcp" [c62d3486-15d6-4398-a397-2f542d8fb074] Running
	I0425 18:52:51.386665   24262 system_pods.go:89] "kube-scheduler-ha-912667" [7dc33762-4bee-467e-9db4-d783ffe04992] Running
	I0425 18:52:51.386674   24262 system_pods.go:89] "kube-scheduler-ha-912667-m02" [d2ab7cf9-3cd9-4b0b-aec1-26aee5cf3b2a] Running
	I0425 18:52:51.386681   24262 system_pods.go:89] "kube-vip-ha-912667" [bd3267a7-206d-4e47-b154-a7f17a492684] Running
	I0425 18:52:51.386688   24262 system_pods.go:89] "kube-vip-ha-912667-m02" [c0622f7e-0264-4168-b510-7563083cc9d3] Running
	I0425 18:52:51.386700   24262 system_pods.go:89] "storage-provisioner" [f3a0b111-609d-49b3-a056-71eb4b641224] Running
	I0425 18:52:51.386712   24262 system_pods.go:126] duration metric: took 208.365447ms to wait for k8s-apps to be running ...
	I0425 18:52:51.386724   24262 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 18:52:51.386781   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:52:51.406830   24262 system_svc.go:56] duration metric: took 20.100576ms WaitForService to wait for kubelet
	I0425 18:52:51.406861   24262 kubeadm.go:576] duration metric: took 17.341866618s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 18:52:51.406885   24262 node_conditions.go:102] verifying NodePressure condition ...
	I0425 18:52:51.574275   24262 request.go:629] Waited for 167.322984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes
	I0425 18:52:51.574416   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes
	I0425 18:52:51.574427   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:51.574434   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:51.574438   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:51.580572   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:52:51.581822   24262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:52:51.581846   24262 node_conditions.go:123] node cpu capacity is 2
	I0425 18:52:51.581856   24262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:52:51.581860   24262 node_conditions.go:123] node cpu capacity is 2
	I0425 18:52:51.581863   24262 node_conditions.go:105] duration metric: took 174.973657ms to run NodePressure ...
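
Verifying the NodePressure condition amounts to listing the nodes and reading the capacity figures echoed above (ephemeral storage and CPU). A small client-go sketch under the same assumptions as the earlier one:

// nodecapacity_sketch.go — hypothetical sketch of the node capacity check.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}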
	I0425 18:52:51.581873   24262 start.go:240] waiting for startup goroutines ...
	I0425 18:52:51.581917   24262 start.go:254] writing updated cluster config ...
	I0425 18:52:51.583726   24262 out.go:177] 
	I0425 18:52:51.585237   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:52:51.585377   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:52:51.587259   24262 out.go:177] * Starting "ha-912667-m03" control-plane node in "ha-912667" cluster
	I0425 18:52:51.588669   24262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:52:51.588692   24262 cache.go:56] Caching tarball of preloaded images
	I0425 18:52:51.588771   24262 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 18:52:51.588782   24262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 18:52:51.588864   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:52:51.589031   24262 start.go:360] acquireMachinesLock for ha-912667-m03: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 18:52:51.589070   24262 start.go:364] duration metric: took 20.106µs to acquireMachinesLock for "ha-912667-m03"
	I0425 18:52:51.589086   24262 start.go:93] Provisioning new machine with config: &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:52:51.589179   24262 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0425 18:52:51.590680   24262 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0425 18:52:51.590748   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:52:51.590770   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:52:51.606521   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I0425 18:52:51.606916   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:52:51.607406   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:52:51.607425   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:52:51.607725   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:52:51.607913   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetMachineName
	I0425 18:52:51.608081   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:52:51.608263   24262 start.go:159] libmachine.API.Create for "ha-912667" (driver="kvm2")
	I0425 18:52:51.608288   24262 client.go:168] LocalClient.Create starting
	I0425 18:52:51.608316   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 18:52:51.608344   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:52:51.608358   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:52:51.608405   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 18:52:51.608423   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:52:51.608434   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:52:51.608449   24262 main.go:141] libmachine: Running pre-create checks...
	I0425 18:52:51.608456   24262 main.go:141] libmachine: (ha-912667-m03) Calling .PreCreateCheck
	I0425 18:52:51.608618   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetConfigRaw
	I0425 18:52:51.608993   24262 main.go:141] libmachine: Creating machine...
	I0425 18:52:51.609007   24262 main.go:141] libmachine: (ha-912667-m03) Calling .Create
	I0425 18:52:51.609147   24262 main.go:141] libmachine: (ha-912667-m03) Creating KVM machine...
	I0425 18:52:51.610519   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found existing default KVM network
	I0425 18:52:51.610624   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found existing private KVM network mk-ha-912667
	I0425 18:52:51.610779   24262 main.go:141] libmachine: (ha-912667-m03) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03 ...
	I0425 18:52:51.610808   24262 main.go:141] libmachine: (ha-912667-m03) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 18:52:51.610878   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:51.610771   25320 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:52:51.610966   24262 main.go:141] libmachine: (ha-912667-m03) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 18:52:51.822118   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:51.821973   25320 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa...
	I0425 18:52:51.896531   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:51.896417   25320 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/ha-912667-m03.rawdisk...
	I0425 18:52:51.896568   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Writing magic tar header
	I0425 18:52:51.896578   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Writing SSH key tar header
	I0425 18:52:51.896586   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:51.896528   25320 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03 ...
	I0425 18:52:51.896648   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03
	I0425 18:52:51.896667   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 18:52:51.896685   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03 (perms=drwx------)
	I0425 18:52:51.896731   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:52:51.896760   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 18:52:51.896777   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 18:52:51.896795   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 18:52:51.896809   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 18:52:51.896824   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 18:52:51.896838   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 18:52:51.896852   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 18:52:51.896864   24262 main.go:141] libmachine: (ha-912667-m03) Creating domain...
	I0425 18:52:51.896884   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins
	I0425 18:52:51.896896   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home
	I0425 18:52:51.896911   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Skipping /home - not owner
	I0425 18:52:51.897780   24262 main.go:141] libmachine: (ha-912667-m03) define libvirt domain using xml: 
	I0425 18:52:51.897797   24262 main.go:141] libmachine: (ha-912667-m03) <domain type='kvm'>
	I0425 18:52:51.897824   24262 main.go:141] libmachine: (ha-912667-m03)   <name>ha-912667-m03</name>
	I0425 18:52:51.897835   24262 main.go:141] libmachine: (ha-912667-m03)   <memory unit='MiB'>2200</memory>
	I0425 18:52:51.897845   24262 main.go:141] libmachine: (ha-912667-m03)   <vcpu>2</vcpu>
	I0425 18:52:51.897859   24262 main.go:141] libmachine: (ha-912667-m03)   <features>
	I0425 18:52:51.897869   24262 main.go:141] libmachine: (ha-912667-m03)     <acpi/>
	I0425 18:52:51.897881   24262 main.go:141] libmachine: (ha-912667-m03)     <apic/>
	I0425 18:52:51.897892   24262 main.go:141] libmachine: (ha-912667-m03)     <pae/>
	I0425 18:52:51.897902   24262 main.go:141] libmachine: (ha-912667-m03)     
	I0425 18:52:51.897930   24262 main.go:141] libmachine: (ha-912667-m03)   </features>
	I0425 18:52:51.897955   24262 main.go:141] libmachine: (ha-912667-m03)   <cpu mode='host-passthrough'>
	I0425 18:52:51.897964   24262 main.go:141] libmachine: (ha-912667-m03)   
	I0425 18:52:51.897974   24262 main.go:141] libmachine: (ha-912667-m03)   </cpu>
	I0425 18:52:51.897983   24262 main.go:141] libmachine: (ha-912667-m03)   <os>
	I0425 18:52:51.897994   24262 main.go:141] libmachine: (ha-912667-m03)     <type>hvm</type>
	I0425 18:52:51.898004   24262 main.go:141] libmachine: (ha-912667-m03)     <boot dev='cdrom'/>
	I0425 18:52:51.898012   24262 main.go:141] libmachine: (ha-912667-m03)     <boot dev='hd'/>
	I0425 18:52:51.898033   24262 main.go:141] libmachine: (ha-912667-m03)     <bootmenu enable='no'/>
	I0425 18:52:51.898051   24262 main.go:141] libmachine: (ha-912667-m03)   </os>
	I0425 18:52:51.898060   24262 main.go:141] libmachine: (ha-912667-m03)   <devices>
	I0425 18:52:51.898070   24262 main.go:141] libmachine: (ha-912667-m03)     <disk type='file' device='cdrom'>
	I0425 18:52:51.898091   24262 main.go:141] libmachine: (ha-912667-m03)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/boot2docker.iso'/>
	I0425 18:52:51.898110   24262 main.go:141] libmachine: (ha-912667-m03)       <target dev='hdc' bus='scsi'/>
	I0425 18:52:51.898123   24262 main.go:141] libmachine: (ha-912667-m03)       <readonly/>
	I0425 18:52:51.898133   24262 main.go:141] libmachine: (ha-912667-m03)     </disk>
	I0425 18:52:51.898144   24262 main.go:141] libmachine: (ha-912667-m03)     <disk type='file' device='disk'>
	I0425 18:52:51.898158   24262 main.go:141] libmachine: (ha-912667-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 18:52:51.898175   24262 main.go:141] libmachine: (ha-912667-m03)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/ha-912667-m03.rawdisk'/>
	I0425 18:52:51.898191   24262 main.go:141] libmachine: (ha-912667-m03)       <target dev='hda' bus='virtio'/>
	I0425 18:52:51.898218   24262 main.go:141] libmachine: (ha-912667-m03)     </disk>
	I0425 18:52:51.898235   24262 main.go:141] libmachine: (ha-912667-m03)     <interface type='network'>
	I0425 18:52:51.898248   24262 main.go:141] libmachine: (ha-912667-m03)       <source network='mk-ha-912667'/>
	I0425 18:52:51.898257   24262 main.go:141] libmachine: (ha-912667-m03)       <model type='virtio'/>
	I0425 18:52:51.898268   24262 main.go:141] libmachine: (ha-912667-m03)     </interface>
	I0425 18:52:51.898280   24262 main.go:141] libmachine: (ha-912667-m03)     <interface type='network'>
	I0425 18:52:51.898293   24262 main.go:141] libmachine: (ha-912667-m03)       <source network='default'/>
	I0425 18:52:51.898309   24262 main.go:141] libmachine: (ha-912667-m03)       <model type='virtio'/>
	I0425 18:52:51.898322   24262 main.go:141] libmachine: (ha-912667-m03)     </interface>
	I0425 18:52:51.898335   24262 main.go:141] libmachine: (ha-912667-m03)     <serial type='pty'>
	I0425 18:52:51.898345   24262 main.go:141] libmachine: (ha-912667-m03)       <target port='0'/>
	I0425 18:52:51.898355   24262 main.go:141] libmachine: (ha-912667-m03)     </serial>
	I0425 18:52:51.898366   24262 main.go:141] libmachine: (ha-912667-m03)     <console type='pty'>
	I0425 18:52:51.898379   24262 main.go:141] libmachine: (ha-912667-m03)       <target type='serial' port='0'/>
	I0425 18:52:51.898395   24262 main.go:141] libmachine: (ha-912667-m03)     </console>
	I0425 18:52:51.898408   24262 main.go:141] libmachine: (ha-912667-m03)     <rng model='virtio'>
	I0425 18:52:51.898420   24262 main.go:141] libmachine: (ha-912667-m03)       <backend model='random'>/dev/random</backend>
	I0425 18:52:51.898441   24262 main.go:141] libmachine: (ha-912667-m03)     </rng>
	I0425 18:52:51.898451   24262 main.go:141] libmachine: (ha-912667-m03)     
	I0425 18:52:51.898470   24262 main.go:141] libmachine: (ha-912667-m03)     
	I0425 18:52:51.898485   24262 main.go:141] libmachine: (ha-912667-m03)   </devices>
	I0425 18:52:51.898505   24262 main.go:141] libmachine: (ha-912667-m03) </domain>
	I0425 18:52:51.898513   24262 main.go:141] libmachine: (ha-912667-m03) 
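
The <domain type='kvm'> XML printed line by line above is handed to libvirt to define and boot the new control-plane VM. A rough sketch with the Go libvirt bindings — the import path and the XML file name are assumptions, and minikube actually does this inside docker-machine-driver-kvm2 rather than a standalone program:

// libvirt_sketch.go — hypothetical sketch of defining and starting a domain from XML.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumption: modern import path of the libvirt Go bindings
)

func main() {
	xml, err := os.ReadFile("ha-912667-m03.xml") // the <domain type='kvm'> document logged above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // register the domain with libvirt
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot it, the equivalent of `virsh start`
		panic(err)
	}
	fmt.Println("domain defined and started")
}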
	I0425 18:52:51.905868   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:3b:cf:2f in network default
	I0425 18:52:51.906430   24262 main.go:141] libmachine: (ha-912667-m03) Ensuring networks are active...
	I0425 18:52:51.906453   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:51.907148   24262 main.go:141] libmachine: (ha-912667-m03) Ensuring network default is active
	I0425 18:52:51.907470   24262 main.go:141] libmachine: (ha-912667-m03) Ensuring network mk-ha-912667 is active
	I0425 18:52:51.907860   24262 main.go:141] libmachine: (ha-912667-m03) Getting domain xml...
	I0425 18:52:51.908577   24262 main.go:141] libmachine: (ha-912667-m03) Creating domain...
	I0425 18:52:53.145546   24262 main.go:141] libmachine: (ha-912667-m03) Waiting to get IP...
	I0425 18:52:53.146295   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:53.146782   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:53.146852   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:53.146756   25320 retry.go:31] will retry after 297.992589ms: waiting for machine to come up
	I0425 18:52:53.446254   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:53.446741   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:53.446772   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:53.446683   25320 retry.go:31] will retry after 302.55332ms: waiting for machine to come up
	I0425 18:52:53.751324   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:53.751803   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:53.751867   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:53.751787   25320 retry.go:31] will retry after 388.619505ms: waiting for machine to come up
	I0425 18:52:54.142472   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:54.142904   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:54.142935   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:54.142855   25320 retry.go:31] will retry after 528.59084ms: waiting for machine to come up
	I0425 18:52:54.672507   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:54.672913   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:54.672941   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:54.672856   25320 retry.go:31] will retry after 623.458204ms: waiting for machine to come up
	I0425 18:52:55.297404   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:55.297882   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:55.297910   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:55.297833   25320 retry.go:31] will retry after 648.625535ms: waiting for machine to come up
	I0425 18:52:55.947623   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:55.947996   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:55.948044   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:55.947970   25320 retry.go:31] will retry after 822.516643ms: waiting for machine to come up
	I0425 18:52:56.772413   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:56.773032   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:56.773057   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:56.772987   25320 retry.go:31] will retry after 1.336973204s: waiting for machine to come up
	I0425 18:52:58.111359   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:58.111843   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:58.111870   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:58.111771   25320 retry.go:31] will retry after 1.545344182s: waiting for machine to come up
	I0425 18:52:59.659246   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:59.659703   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:59.659728   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:59.659658   25320 retry.go:31] will retry after 1.880100949s: waiting for machine to come up
	I0425 18:53:01.541261   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:01.541770   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:53:01.541808   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:53:01.541669   25320 retry.go:31] will retry after 1.940972079s: waiting for machine to come up
	I0425 18:53:03.484587   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:03.485121   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:53:03.485151   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:53:03.485093   25320 retry.go:31] will retry after 2.734995729s: waiting for machine to come up
	I0425 18:53:06.222893   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:06.223400   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:53:06.223433   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:53:06.223350   25320 retry.go:31] will retry after 4.10929529s: waiting for machine to come up
	I0425 18:53:10.335229   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:10.335604   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:53:10.335632   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:53:10.335551   25320 retry.go:31] will retry after 4.681170749s: waiting for machine to come up
	I0425 18:53:15.019237   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.019716   24262 main.go:141] libmachine: (ha-912667-m03) Found IP for machine: 192.168.39.179
	I0425 18:53:15.019739   24262 main.go:141] libmachine: (ha-912667-m03) Reserving static IP address...
	I0425 18:53:15.019750   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has current primary IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.020085   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find host DHCP lease matching {name: "ha-912667-m03", mac: "52:54:00:fb:3e:7a", ip: "192.168.39.179"} in network mk-ha-912667
	I0425 18:53:15.092151   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Getting to WaitForSSH function...
	I0425 18:53:15.092176   24262 main.go:141] libmachine: (ha-912667-m03) Reserved static IP address: 192.168.39.179
	I0425 18:53:15.092225   24262 main.go:141] libmachine: (ha-912667-m03) Waiting for SSH to be available...
	I0425 18:53:15.095142   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.095685   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.095720   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.095980   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Using SSH client type: external
	I0425 18:53:15.096018   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa (-rw-------)
	I0425 18:53:15.096054   24262 main.go:141] libmachine: (ha-912667-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 18:53:15.096069   24262 main.go:141] libmachine: (ha-912667-m03) DBG | About to run SSH command:
	I0425 18:53:15.096084   24262 main.go:141] libmachine: (ha-912667-m03) DBG | exit 0
	I0425 18:53:15.226589   24262 main.go:141] libmachine: (ha-912667-m03) DBG | SSH cmd err, output: <nil>: 
	I0425 18:53:15.226836   24262 main.go:141] libmachine: (ha-912667-m03) KVM machine creation complete!
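
Between "Creating domain..." and "Found IP" the driver keeps asking for the DHCP lease of the VM's MAC address with a growing delay, as the retry.go lines above show. A toy sketch of that loop; lookupIP is a placeholder for the driver's lease lookup, not a real function:

// waitip_sketch.go — hypothetical sketch of the wait-for-IP retry loop.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for "ask libvirt/dnsmasq for the lease of this MAC".
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second { // grow the delay roughly like the log above
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
}

func main() {
	ip, err := waitForIP("52:54:00:fb:3e:7a", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found IP:", ip)
}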
	I0425 18:53:15.227213   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetConfigRaw
	I0425 18:53:15.227696   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:15.227896   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:15.228064   24262 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 18:53:15.228078   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetState
	I0425 18:53:15.229352   24262 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 18:53:15.229368   24262 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 18:53:15.229375   24262 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 18:53:15.229381   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.232456   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.232927   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.232954   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.233279   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:15.233445   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.233615   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.233819   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:15.233997   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:15.234277   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:15.234295   24262 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 18:53:15.346168   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 18:53:15.346194   24262 main.go:141] libmachine: Detecting the provisioner...
	I0425 18:53:15.346215   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.348956   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.349351   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.349380   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.349544   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:15.349726   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.349884   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.349995   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:15.350132   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:15.350358   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:15.350370   24262 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 18:53:15.463810   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 18:53:15.463889   24262 main.go:141] libmachine: found compatible host: buildroot
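
Provisioner detection is just `cat /etc/os-release` over SSH followed by matching the ID field; the minikube ISO reports ID=buildroot, hence "found compatible host: buildroot". A small sketch that parses the exact output captured above (the function name is illustrative):

// osrelease_sketch.go — hypothetical sketch of provisioner detection from /etc/os-release.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`) // "buildroot" on the minikube ISO
		}
	}
	return "unknown"
}

func main() {
	// The output captured in the log above.
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println("found compatible host:", detectProvisioner(out))
}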
	I0425 18:53:15.463903   24262 main.go:141] libmachine: Provisioning with buildroot...
	I0425 18:53:15.463913   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetMachineName
	I0425 18:53:15.464193   24262 buildroot.go:166] provisioning hostname "ha-912667-m03"
	I0425 18:53:15.464223   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetMachineName
	I0425 18:53:15.464412   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.466951   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.467302   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.467328   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.467545   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:15.467700   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.467854   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.468013   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:15.468331   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:15.468515   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:15.468536   24262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-912667-m03 && echo "ha-912667-m03" | sudo tee /etc/hostname
	I0425 18:53:15.599786   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-912667-m03
	
	I0425 18:53:15.599819   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.602507   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.602891   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.602921   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.603114   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:15.603337   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.603497   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.603671   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:15.603868   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:15.604024   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:15.604040   24262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-912667-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-912667-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-912667-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 18:53:15.730061   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 18:53:15.730093   24262 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 18:53:15.730116   24262 buildroot.go:174] setting up certificates
	I0425 18:53:15.730126   24262 provision.go:84] configureAuth start
	I0425 18:53:15.730134   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetMachineName
	I0425 18:53:15.730420   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:53:15.733016   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.733412   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.733442   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.733549   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.735702   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.736039   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.736066   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.736212   24262 provision.go:143] copyHostCerts
	I0425 18:53:15.736246   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:53:15.736285   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 18:53:15.736295   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:53:15.736390   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 18:53:15.736495   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:53:15.736522   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 18:53:15.736532   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:53:15.736571   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 18:53:15.736639   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:53:15.736665   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 18:53:15.736674   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:53:15.736704   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 18:53:15.736785   24262 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.ha-912667-m03 san=[127.0.0.1 192.168.39.179 ha-912667-m03 localhost minikube]
	I0425 18:53:15.922828   24262 provision.go:177] copyRemoteCerts
	I0425 18:53:15.922899   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 18:53:15.922930   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.925985   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.926326   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.926354   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.926562   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:15.926761   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.926909   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:15.927047   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:53:16.015167   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 18:53:16.015242   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 18:53:16.044577   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 18:53:16.044645   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0425 18:53:16.070841   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 18:53:16.070920   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 18:53:16.097754   24262 provision.go:87] duration metric: took 367.611542ms to configureAuth
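Note: the server certificate generated above is signed by the local minikube CA and carries a SAN list of [127.0.0.1 192.168.39.179 ha-912667-m03 localhost minikube], so the machine's TLS endpoint stays valid under any of those names. A quick way to inspect what actually landed in the cert (a sketch, reusing the server.pem path from the log) is:

    openssl x509 -in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'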
	I0425 18:53:16.097784   24262 buildroot.go:189] setting minikube options for container-runtime
	I0425 18:53:16.098040   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:53:16.098130   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:16.100878   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.101337   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.101367   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.101525   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:16.101731   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.101938   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.102080   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:16.102283   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:16.102481   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:16.102504   24262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 18:53:16.402337   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
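Note: the command above drops CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube; the crio systemd unit on minikube's Buildroot image is assumed to read that file as an EnvironmentFile (implied by the restart in the same command), so the in-cluster service CIDR gets whitelisted as an insecure registry source. A hedged way to confirm the wiring on the node:

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environment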
	I0425 18:53:16.402367   24262 main.go:141] libmachine: Checking connection to Docker...
	I0425 18:53:16.402377   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetURL
	I0425 18:53:16.403540   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Using libvirt version 6000000
	I0425 18:53:16.405969   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.406391   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.406425   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.406597   24262 main.go:141] libmachine: Docker is up and running!
	I0425 18:53:16.406613   24262 main.go:141] libmachine: Reticulating splines...
	I0425 18:53:16.406621   24262 client.go:171] duration metric: took 24.798324995s to LocalClient.Create
	I0425 18:53:16.406648   24262 start.go:167] duration metric: took 24.798385221s to libmachine.API.Create "ha-912667"
	I0425 18:53:16.406659   24262 start.go:293] postStartSetup for "ha-912667-m03" (driver="kvm2")
	I0425 18:53:16.406671   24262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 18:53:16.406693   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:16.406934   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 18:53:16.406962   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:16.409598   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.410161   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.410193   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.410382   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:16.410568   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.410744   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:16.410892   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:53:16.503631   24262 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 18:53:16.508930   24262 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 18:53:16.508950   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 18:53:16.509032   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 18:53:16.509115   24262 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 18:53:16.509124   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 18:53:16.509215   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 18:53:16.520806   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:53:16.549619   24262 start.go:296] duration metric: took 142.947257ms for postStartSetup
	I0425 18:53:16.549668   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetConfigRaw
	I0425 18:53:16.550310   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:53:16.552882   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.553328   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.553356   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.553596   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:53:16.553787   24262 start.go:128] duration metric: took 24.964599205s to createHost
	I0425 18:53:16.553811   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:16.556093   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.556461   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.556490   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.556589   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:16.556775   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.556963   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.557130   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:16.557327   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:16.557538   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:16.557556   24262 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 18:53:16.672263   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714071196.642856982
	
	I0425 18:53:16.672288   24262 fix.go:216] guest clock: 1714071196.642856982
	I0425 18:53:16.672298   24262 fix.go:229] Guest: 2024-04-25 18:53:16.642856982 +0000 UTC Remote: 2024-04-25 18:53:16.553800383 +0000 UTC m=+221.129214256 (delta=89.056599ms)
	I0425 18:53:16.672333   24262 fix.go:200] guest clock delta is within tolerance: 89.056599ms
	I0425 18:53:16.672338   24262 start.go:83] releasing machines lock for "ha-912667-m03", held for 25.083259716s
	I0425 18:53:16.672356   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:16.672655   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:53:16.675500   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.676078   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.676140   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.678596   24262 out.go:177] * Found network options:
	I0425 18:53:16.679994   24262 out.go:177]   - NO_PROXY=192.168.39.189,192.168.39.66
	W0425 18:53:16.681519   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	W0425 18:53:16.681544   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 18:53:16.681558   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:16.682180   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:16.682411   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:16.682520   24262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 18:53:16.682558   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	W0425 18:53:16.682649   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	W0425 18:53:16.682682   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 18:53:16.682779   24262 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 18:53:16.682803   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:16.685470   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.685546   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.685935   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.685961   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.685990   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.686004   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.686233   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:16.686312   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:16.686458   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.686477   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.686623   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:16.686669   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:16.686746   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:53:16.686847   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:53:16.934872   24262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 18:53:16.941872   24262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 18:53:16.941929   24262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 18:53:16.962537   24262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 18:53:16.962558   24262 start.go:494] detecting cgroup driver to use...
	I0425 18:53:16.962615   24262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 18:53:16.980186   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 18:53:16.997938   24262 docker.go:217] disabling cri-docker service (if available) ...
	I0425 18:53:16.997995   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 18:53:17.013248   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 18:53:17.029156   24262 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 18:53:17.148560   24262 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 18:53:17.303796   24262 docker.go:233] disabling docker service ...
	I0425 18:53:17.303879   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 18:53:17.321798   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 18:53:17.336439   24262 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 18:53:17.488152   24262 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 18:53:17.626994   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 18:53:17.642591   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 18:53:17.662872   24262 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 18:53:17.662948   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.674109   24262 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 18:53:17.674160   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.685617   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.698662   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.710613   24262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 18:53:17.722305   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.733467   24262 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.752260   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.764484   24262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 18:53:17.776224   24262 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 18:53:17.776297   24262 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 18:53:17.791800   24262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 18:53:17.803882   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:53:17.936848   24262 ssh_runner.go:195] Run: sudo systemctl restart crio
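Note: the sequence of sed edits above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before the restart. The intended end state looks roughly like the sketch below (key names come from the commands in the log; the surrounding TOML sections and any other pre-existing keys in the image may differ):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]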
	I0425 18:53:18.107505   24262 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 18:53:18.107580   24262 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 18:53:18.113331   24262 start.go:562] Will wait 60s for crictl version
	I0425 18:53:18.113379   24262 ssh_runner.go:195] Run: which crictl
	I0425 18:53:18.118070   24262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 18:53:18.158674   24262 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 18:53:18.158758   24262 ssh_runner.go:195] Run: crio --version
	I0425 18:53:18.192445   24262 ssh_runner.go:195] Run: crio --version
	I0425 18:53:18.235932   24262 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 18:53:18.237318   24262 out.go:177]   - env NO_PROXY=192.168.39.189
	I0425 18:53:18.238717   24262 out.go:177]   - env NO_PROXY=192.168.39.189,192.168.39.66
	I0425 18:53:18.240178   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:53:18.242594   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:18.242972   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:18.242994   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:18.243230   24262 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 18:53:18.248298   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:53:18.264425   24262 mustload.go:65] Loading cluster: ha-912667
	I0425 18:53:18.264708   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:53:18.265051   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:53:18.265100   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:53:18.281459   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
	I0425 18:53:18.281926   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:53:18.282451   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:53:18.282475   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:53:18.282795   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:53:18.282986   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:53:18.284711   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:53:18.284990   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:53:18.285025   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:53:18.299787   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46883
	I0425 18:53:18.300198   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:53:18.300683   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:53:18.300707   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:53:18.301018   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:53:18.301240   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:53:18.301427   24262 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667 for IP: 192.168.39.179
	I0425 18:53:18.301438   24262 certs.go:194] generating shared ca certs ...
	I0425 18:53:18.301452   24262 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:53:18.301608   24262 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 18:53:18.301661   24262 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 18:53:18.301673   24262 certs.go:256] generating profile certs ...
	I0425 18:53:18.301765   24262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key
	I0425 18:53:18.301798   24262 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.8f7228b6
	I0425 18:53:18.301821   24262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.8f7228b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189 192.168.39.66 192.168.39.179 192.168.39.254]
	I0425 18:53:18.432850   24262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.8f7228b6 ...
	I0425 18:53:18.432878   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.8f7228b6: {Name:mk6e41bd710998fe356ce65f93113c2167092d8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:53:18.433039   24262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.8f7228b6 ...
	I0425 18:53:18.433051   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.8f7228b6: {Name:mkf31c6c2f1c1bc77655aa623ce0d079f6c7a498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:53:18.433119   24262 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.8f7228b6 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt
	I0425 18:53:18.433240   24262 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.8f7228b6 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key
	I0425 18:53:18.433358   24262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key
	I0425 18:53:18.433373   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 18:53:18.433386   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 18:53:18.433399   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 18:53:18.433412   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 18:53:18.433424   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 18:53:18.433436   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 18:53:18.433449   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 18:53:18.433461   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 18:53:18.433515   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 18:53:18.433548   24262 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 18:53:18.433555   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 18:53:18.433576   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 18:53:18.433598   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 18:53:18.433618   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 18:53:18.433656   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:53:18.433726   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 18:53:18.433741   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:53:18.433750   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 18:53:18.433777   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:53:18.436934   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:53:18.437353   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:53:18.437398   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:53:18.437609   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:53:18.437787   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:53:18.437921   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:53:18.438039   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:53:18.514578   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0425 18:53:18.520594   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0425 18:53:18.534986   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0425 18:53:18.540597   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0425 18:53:18.554830   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0425 18:53:18.560363   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0425 18:53:18.574403   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0425 18:53:18.579401   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0425 18:53:18.592339   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0425 18:53:18.597297   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0425 18:53:18.609908   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0425 18:53:18.614992   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0425 18:53:18.629538   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 18:53:18.659495   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 18:53:18.688248   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 18:53:18.716123   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 18:53:18.745411   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0425 18:53:18.774655   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 18:53:18.803856   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 18:53:18.834607   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0425 18:53:18.864115   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 18:53:18.893731   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 18:53:18.923651   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 18:53:18.951795   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0425 18:53:18.971502   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0425 18:53:18.990777   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0425 18:53:19.009285   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0425 18:53:19.027525   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0425 18:53:19.047213   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0425 18:53:19.065355   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0425 18:53:19.083746   24262 ssh_runner.go:195] Run: openssl version
	I0425 18:53:19.090003   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 18:53:19.104003   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:53:19.109596   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:53:19.109652   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:53:19.116334   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 18:53:19.128996   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 18:53:19.142687   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 18:53:19.148332   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 18:53:19.148395   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 18:53:19.155004   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 18:53:19.167760   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 18:53:19.180460   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 18:53:19.186119   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 18:53:19.186181   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 18:53:19.192673   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
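Note: each ln -fs above follows OpenSSL's hashed-directory convention: the symlink name is the certificate's subject hash plus a .0 suffix, which is how consumers of /etc/ssl/certs locate the CA. The hash can be reproduced by hand, e.g. for the minikube CA (the value should match the b5213941.0 link created above):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem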
	I0425 18:53:19.204764   24262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 18:53:19.209519   24262 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 18:53:19.209577   24262 kubeadm.go:928] updating node {m03 192.168.39.179 8443 v1.30.0 crio true true} ...
	I0425 18:53:19.209668   24262 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-912667-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
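Note: the empty ExecStart= line in the generated unit fragment is the standard systemd idiom for clearing any previously defined command so that the drop-in (written a few steps later to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) fully controls the kubelet invocation, including the per-node --hostname-override and --node-ip flags. On the node, the merged result can be inspected with:

    systemctl cat kubelet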
	I0425 18:53:19.209696   24262 kube-vip.go:111] generating kube-vip config ...
	I0425 18:53:19.209738   24262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0425 18:53:19.229688   24262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0425 18:53:19.229755   24262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
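Note: this static pod manifest is what provides the control-plane VIP 192.168.39.254: the kube-vip instances on the control-plane nodes elect a leader through the plndr-cp-lock lease and the leader answers ARP for the VIP, with load-balancing of port 8443 enabled via lb_enable/lb_port. Two hedged checks for which node currently holds the address:

    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
    ip -4 addr show eth0 | grep 192.168.39.254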
	I0425 18:53:19.229808   24262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 18:53:19.240912   24262 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0425 18:53:19.240967   24262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0425 18:53:19.251672   24262 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0425 18:53:19.251686   24262 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0425 18:53:19.251693   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0425 18:53:19.251700   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0425 18:53:19.251747   24262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0425 18:53:19.251750   24262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0425 18:53:19.251685   24262 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0425 18:53:19.251802   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:53:19.270889   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0425 18:53:19.270937   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0425 18:53:19.270960   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0425 18:53:19.270967   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0425 18:53:19.270997   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0425 18:53:19.271051   24262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0425 18:53:19.310844   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0425 18:53:19.310882   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
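Note: the kubectl/kubeadm/kubelet binaries are fetched from dl.k8s.io with a checksum=file: query, so the downloader validates each binary against its published .sha256 file before copying it into /var/lib/minikube/binaries/v1.30.0. The same verification can be reproduced by hand (a sketch for kubelet):

    curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet
    echo "$(curl -L https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check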
	I0425 18:53:20.311066   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0425 18:53:20.323330   24262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0425 18:53:20.345409   24262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 18:53:20.366291   24262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0425 18:53:20.387008   24262 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0425 18:53:20.391355   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:53:20.407468   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:53:20.560904   24262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:53:20.581539   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:53:20.582032   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:53:20.582079   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:53:20.597302   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0425 18:53:20.598195   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:53:20.598694   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:53:20.598723   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:53:20.599086   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:53:20.599259   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:53:20.599395   24262 start.go:316] joinCluster: &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:53:20.599557   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0425 18:53:20.599580   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:53:20.602619   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:53:20.603063   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:53:20.603090   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:53:20.603207   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:53:20.603340   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:53:20.603526   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:53:20.603656   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:53:20.779660   24262 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:53:20.779707   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e104gh.sh6getxhhtdg6ymu --discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-912667-m03 --control-plane --apiserver-advertise-address=192.168.39.179 --apiserver-bind-port=8443"
	I0425 18:53:46.324259   24262 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e104gh.sh6getxhhtdg6ymu --discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-912667-m03 --control-plane --apiserver-advertise-address=192.168.39.179 --apiserver-bind-port=8443": (25.5445293s)
	I0425 18:53:46.324294   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0425 18:53:46.971782   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-912667-m03 minikube.k8s.io/updated_at=2024_04_25T18_53_46_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=ha-912667 minikube.k8s.io/primary=false
	I0425 18:53:47.102167   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-912667-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0425 18:53:47.240730   24262 start.go:318] duration metric: took 26.641328067s to joinCluster
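Note: at this point ha-912667-m03 has joined as a third control-plane member; the preceding kubectl invocations label it with the minikube metadata and remove the node-role.kubernetes.io/control-plane:NoSchedule taint so it also schedules workloads (matching ControlPlane:true Worker:true in the node spec). A quick sanity check against the cluster would be:

    kubectl get nodes -o wide
    kubectl describe node ha-912667-m03 | grep -E 'Roles|Taints'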
	I0425 18:53:47.240864   24262 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:53:47.242322   24262 out.go:177] * Verifying Kubernetes components...
	I0425 18:53:47.241205   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:53:47.243591   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:53:47.541877   24262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:53:47.585988   24262 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:53:47.586359   24262 kapi.go:59] client config for ha-912667: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt", KeyFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key", CAFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0425 18:53:47.586443   24262 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.189:8443
	I0425 18:53:47.586700   24262 node_ready.go:35] waiting up to 6m0s for node "ha-912667-m03" to be "Ready" ...
	I0425 18:53:47.586845   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:47.586860   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:47.586870   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:47.586877   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:47.590835   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:48.087327   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:48.087356   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:48.087374   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:48.087379   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:48.092821   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:53:48.587267   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:48.587294   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:48.587305   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:48.587312   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:48.635333   24262 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I0425 18:53:49.087496   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:49.087523   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:49.087536   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:49.087545   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:49.091777   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:49.587190   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:49.587218   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:49.587228   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:49.587235   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:49.590725   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:49.591485   24262 node_ready.go:53] node "ha-912667-m03" has status "Ready":"False"
	I0425 18:53:50.087741   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:50.087762   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:50.087769   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:50.087774   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:50.092367   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:50.587385   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:50.587410   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:50.587420   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:50.587426   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:50.591571   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:51.087336   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:51.087358   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:51.087365   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:51.087370   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:51.091431   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:51.587477   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:51.587501   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:51.587509   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:51.587513   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:51.591781   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:51.592464   24262 node_ready.go:53] node "ha-912667-m03" has status "Ready":"False"
	I0425 18:53:52.087079   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:52.087104   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:52.087114   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:52.087126   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:52.091475   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:52.587954   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:52.587984   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:52.587997   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:52.588003   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:52.592216   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:53.086916   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:53.086943   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:53.086955   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:53.086960   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:53.091541   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:53.587419   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:53.587441   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:53.587450   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:53.587454   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:53.591776   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:54.087492   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:54.087521   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:54.087532   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:54.087538   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:54.093770   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:53:54.094256   24262 node_ready.go:53] node "ha-912667-m03" has status "Ready":"False"
	I0425 18:53:54.587146   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:54.587174   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:54.587182   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:54.587186   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:54.591607   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:55.087514   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:55.087542   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.087554   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.087560   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.106191   24262 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0425 18:53:55.107030   24262 node_ready.go:49] node "ha-912667-m03" has status "Ready":"True"
	I0425 18:53:55.107059   24262 node_ready.go:38] duration metric: took 7.520333617s for node "ha-912667-m03" to be "Ready" ...
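For reference, the readiness wait above (repeated GETs of /api/v1/nodes/ha-912667-m03 roughly every 500ms until the node reports Ready) can be reproduced with a minimal client-go sketch like the one below. This is not minikube's own code; the kubeconfig path, interval, and timeout are assumptions chosen to mirror the log.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; the run above uses the jenkins integration kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll about twice a second, as in the log, for up to 6 minutes.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := client.CoreV1().Nodes().Get(ctx, "ha-912667-m03", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient errors as "not ready yet" and keep polling
    			}
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("node ready wait finished, err =", err)
    }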
	I0425 18:53:55.107070   24262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 18:53:55.107148   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:53:55.107163   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.107173   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.107179   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.134362   24262 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0425 18:53:55.140632   24262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.140724   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-22wvx
	I0425 18:53:55.140739   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.140750   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.140756   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.150957   24262 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0425 18:53:55.151573   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:55.151593   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.151604   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.151610   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.154891   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:55.155456   24262 pod_ready.go:92] pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.155478   24262 pod_ready.go:81] duration metric: took 14.817716ms for pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.155490   24262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.155558   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-h4s2h
	I0425 18:53:55.155569   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.155578   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.155582   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.158241   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:53:55.159287   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:55.159305   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.159315   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.159320   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.161876   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:53:55.162467   24262 pod_ready.go:92] pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.162486   24262 pod_ready.go:81] duration metric: took 6.988369ms for pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.162499   24262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.162565   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667
	I0425 18:53:55.162575   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.162585   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.162594   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.166084   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:55.167057   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:55.167070   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.167076   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.167081   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.171470   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:55.172158   24262 pod_ready.go:92] pod "etcd-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.172181   24262 pod_ready.go:81] duration metric: took 9.671098ms for pod "etcd-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.172193   24262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.172259   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m02
	I0425 18:53:55.172272   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.172281   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.172286   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.176266   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:55.177785   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:53:55.177801   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.177810   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.177813   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.180264   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:53:55.180897   24262 pod_ready.go:92] pod "etcd-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.180912   24262 pod_ready.go:81] duration metric: took 8.711147ms for pod "etcd-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.180924   24262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.288235   24262 request.go:629] Waited for 107.243045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m03
	I0425 18:53:55.288330   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m03
	I0425 18:53:55.288338   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.288349   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.288355   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.294122   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:53:55.488365   24262 request.go:629] Waited for 193.451029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:55.488418   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:55.488424   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.488430   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.488433   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.493693   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:53:55.494314   24262 pod_ready.go:92] pod "etcd-ha-912667-m03" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.494339   24262 pod_ready.go:81] duration metric: took 313.407013ms for pod "etcd-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.494367   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.688199   24262 request.go:629] Waited for 193.737053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667
	I0425 18:53:55.688262   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667
	I0425 18:53:55.688268   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.688275   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.688280   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.692067   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:55.888501   24262 request.go:629] Waited for 195.38776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:55.888560   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:55.888567   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.888590   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.888599   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.892153   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:55.892950   24262 pod_ready.go:92] pod "kube-apiserver-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.892969   24262 pod_ready.go:81] duration metric: took 398.590637ms for pod "kube-apiserver-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.892978   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:56.088062   24262 request.go:629] Waited for 195.015479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:53:56.088131   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:53:56.088137   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:56.088147   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:56.088155   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:56.093110   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:56.287650   24262 request.go:629] Waited for 193.321791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:53:56.287747   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:53:56.287765   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:56.287776   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:56.287782   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:56.293910   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:53:56.294517   24262 pod_ready.go:92] pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:56.294535   24262 pod_ready.go:81] duration metric: took 401.549867ms for pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:56.294544   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:56.487547   24262 request.go:629] Waited for 192.942824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:56.487612   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:56.487617   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:56.487625   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:56.487629   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:56.491542   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:56.687978   24262 request.go:629] Waited for 195.305945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:56.688082   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:56.688090   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:56.688105   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:56.688116   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:56.692382   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:56.888580   24262 request.go:629] Waited for 93.275877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:56.888650   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:56.888658   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:56.888669   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:56.888673   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:56.893577   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:57.087676   24262 request.go:629] Waited for 193.27677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.087745   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.087756   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:57.087776   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:57.087799   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:57.091355   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:57.295147   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:57.295170   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:57.295177   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:57.295181   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:57.299173   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:57.488322   24262 request.go:629] Waited for 188.346006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.488413   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.488422   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:57.488434   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:57.488441   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:57.493000   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:57.794794   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:57.794819   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:57.794827   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:57.794830   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:57.798277   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:57.888510   24262 request.go:629] Waited for 89.282261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.888563   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.888570   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:57.888580   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:57.888586   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:57.892567   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:58.294683   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:58.294702   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:58.294710   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:58.294714   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:58.298942   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:58.299686   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:58.299702   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:58.299709   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:58.299713   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:58.303622   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:58.304600   24262 pod_ready.go:102] pod "kube-apiserver-ha-912667-m03" in "kube-system" namespace has status "Ready":"False"
	I0425 18:53:58.795718   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:58.795745   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:58.795756   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:58.795760   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:58.800977   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:53:58.801943   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:58.801967   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:58.801978   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:58.801985   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:58.806113   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:59.295432   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:59.295462   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.295470   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.295475   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.299284   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.300323   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:59.300340   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.300347   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.300352   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.304215   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.304915   24262 pod_ready.go:92] pod "kube-apiserver-ha-912667-m03" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:59.304935   24262 pod_ready.go:81] duration metric: took 3.010384418s for pod "kube-apiserver-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:59.304949   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:59.305011   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667
	I0425 18:53:59.305022   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.305032   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.305038   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.308834   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.487798   24262 request.go:629] Waited for 178.313383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:59.487865   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:59.487873   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.487883   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.487892   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.491597   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.492224   24262 pod_ready.go:92] pod "kube-controller-manager-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:59.492251   24262 pod_ready.go:81] duration metric: took 187.292003ms for pod "kube-controller-manager-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:59.492266   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:59.688448   24262 request.go:629] Waited for 196.118207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:53:59.688514   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:53:59.688522   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.688542   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.688569   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.692519   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.888234   24262 request.go:629] Waited for 195.027515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:53:59.888315   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:53:59.888324   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.888331   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.888344   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.892038   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.892717   24262 pod_ready.go:92] pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:59.892734   24262 pod_ready.go:81] duration metric: took 400.460928ms for pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:59.892744   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:00.087909   24262 request.go:629] Waited for 195.107362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m03
	I0425 18:54:00.087990   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m03
	I0425 18:54:00.088001   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:00.088009   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:00.088013   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:00.092611   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:00.287913   24262 request.go:629] Waited for 194.380558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:00.287995   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:00.288004   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:00.288014   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:00.288024   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:00.291982   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:00.292916   24262 pod_ready.go:92] pod "kube-controller-manager-ha-912667-m03" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:00.292936   24262 pod_ready.go:81] duration metric: took 400.186731ms for pod "kube-controller-manager-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:00.292947   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9zxln" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:00.488111   24262 request.go:629] Waited for 195.107324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zxln
	I0425 18:54:00.488186   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zxln
	I0425 18:54:00.488192   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:00.488200   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:00.488214   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:00.492219   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:00.688120   24262 request.go:629] Waited for 194.770439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:00.688183   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:00.688190   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:00.688198   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:00.688203   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:00.691756   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:00.692449   24262 pod_ready.go:92] pod "kube-proxy-9zxln" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:00.692469   24262 pod_ready.go:81] duration metric: took 399.51603ms for pod "kube-proxy-9zxln" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:00.692483   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mkgv5" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:00.888474   24262 request.go:629] Waited for 195.922903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkgv5
	I0425 18:54:00.888569   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkgv5
	I0425 18:54:00.888581   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:00.888589   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:00.888593   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:00.893765   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:54:01.088008   24262 request.go:629] Waited for 193.382615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:54:01.088070   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:54:01.088077   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:01.088088   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:01.088094   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:01.092407   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:01.093369   24262 pod_ready.go:92] pod "kube-proxy-mkgv5" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:01.093394   24262 pod_ready.go:81] duration metric: took 400.90273ms for pod "kube-proxy-mkgv5" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:01.093408   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rkbcp" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:01.288497   24262 request.go:629] Waited for 195.011294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rkbcp
	I0425 18:54:01.288592   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rkbcp
	I0425 18:54:01.288601   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:01.288609   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:01.288613   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:01.292744   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:01.487659   24262 request.go:629] Waited for 194.314073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:54:01.487736   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:54:01.487742   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:01.487750   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:01.487755   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:01.492230   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:01.493257   24262 pod_ready.go:92] pod "kube-proxy-rkbcp" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:01.493287   24262 pod_ready.go:81] duration metric: took 399.871904ms for pod "kube-proxy-rkbcp" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:01.493300   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:01.687824   24262 request.go:629] Waited for 194.379121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667
	I0425 18:54:01.687892   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667
	I0425 18:54:01.687900   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:01.687912   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:01.687919   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:01.691711   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:01.887933   24262 request.go:629] Waited for 195.363443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:54:01.888029   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:54:01.888042   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:01.888053   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:01.888059   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:01.892043   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:01.892973   24262 pod_ready.go:92] pod "kube-scheduler-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:01.892997   24262 pod_ready.go:81] duration metric: took 399.688109ms for pod "kube-scheduler-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:01.893010   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:02.088084   24262 request.go:629] Waited for 194.983596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m02
	I0425 18:54:02.088148   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m02
	I0425 18:54:02.088156   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.088164   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.088172   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.092045   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:02.288459   24262 request.go:629] Waited for 195.383107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:54:02.288515   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:54:02.288521   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.288529   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.288534   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.293069   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:02.294069   24262 pod_ready.go:92] pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:02.294086   24262 pod_ready.go:81] duration metric: took 401.060695ms for pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:02.294095   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:02.488379   24262 request.go:629] Waited for 194.220272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m03
	I0425 18:54:02.488491   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m03
	I0425 18:54:02.488515   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.488529   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.488550   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.493344   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:02.687823   24262 request.go:629] Waited for 193.364395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:02.687923   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:02.687935   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.687946   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.687957   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.691918   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:02.692539   24262 pod_ready.go:92] pod "kube-scheduler-ha-912667-m03" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:02.692564   24262 pod_ready.go:81] duration metric: took 398.460848ms for pod "kube-scheduler-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:02.692578   24262 pod_ready.go:38] duration metric: took 7.585495691s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
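The repeated "Waited for ... due to client-side throttling, not priority and fairness" messages during the pod-readiness phase come from the client-go rate limiter: the rest.Config logged earlier leaves QPS and Burst at 0, so the client-go defaults (roughly 5 QPS with a burst of 10) apply. A minimal sketch of raising those limits on the client configuration follows; the values and kubeconfig path are illustrative assumptions, not what the test harness does.

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// With QPS/Burst left at zero, client-go falls back to its small defaults,
    	// which is what produces the client-side throttling waits seen in the log.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	client := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println("client ready:", client != nil)
    }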
	I0425 18:54:02.692595   24262 api_server.go:52] waiting for apiserver process to appear ...
	I0425 18:54:02.692656   24262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:54:02.708782   24262 api_server.go:72] duration metric: took 15.467874327s to wait for apiserver process to appear ...
	I0425 18:54:02.708812   24262 api_server.go:88] waiting for apiserver healthz status ...
	I0425 18:54:02.708837   24262 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I0425 18:54:02.713298   24262 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I0425 18:54:02.713374   24262 round_trippers.go:463] GET https://192.168.39.189:8443/version
	I0425 18:54:02.713385   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.713398   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.713408   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.714582   24262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 18:54:02.714713   24262 api_server.go:141] control plane version: v1.30.0
	I0425 18:54:02.714730   24262 api_server.go:131] duration metric: took 5.911686ms to wait for apiserver health ...
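The healthz probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, which returns the body "ok" with status 200 when the control plane is healthy. A small sketch of the same check is shown below; it is not minikube's code, it skips TLS verification for brevity (the real check uses the cluster CA from the kubeconfig), and it assumes the default anonymous access to /healthz is enabled.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.39.189:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
    }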
	I0425 18:54:02.714736   24262 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 18:54:02.888023   24262 request.go:629] Waited for 173.221604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:54:02.888107   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:54:02.888118   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.888140   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.888166   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.898312   24262 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0425 18:54:02.907148   24262 system_pods.go:59] 24 kube-system pods found
	I0425 18:54:02.907177   24262 system_pods.go:61] "coredns-7db6d8ff4d-22wvx" [56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e] Running
	I0425 18:54:02.907182   24262 system_pods.go:61] "coredns-7db6d8ff4d-h4s2h" [f9e2233c-5350-47ab-bdae-6fa35972b601] Running
	I0425 18:54:02.907186   24262 system_pods.go:61] "etcd-ha-912667" [d18fe5ec-655e-4da4-b8de-782eef846d55] Running
	I0425 18:54:02.907189   24262 system_pods.go:61] "etcd-ha-912667-m02" [8d6782f6-b00b-4d10-8a3a-452460974164] Running
	I0425 18:54:02.907192   24262 system_pods.go:61] "etcd-ha-912667-m03" [24ac9b8b-9f01-4edb-b82d-8bca7df1a74f] Running
	I0425 18:54:02.907196   24262 system_pods.go:61] "kindnet-gcbv6" [03aab1af-e03a-4ff7-bb92-6d22c1dd8d2a] Running
	I0425 18:54:02.907200   24262 system_pods.go:61] "kindnet-sq4lb" [049d5dc9-13ec-4135-8785-229071e57d1a] Running
	I0425 18:54:02.907203   24262 system_pods.go:61] "kindnet-xlvjt" [191ff28e-07d7-459e-afe5-e3d8c23e1016] Running
	I0425 18:54:02.907205   24262 system_pods.go:61] "kube-apiserver-ha-912667" [a8339e9c-d67f-4e84-ba79-754ad86fdf82] Running
	I0425 18:54:02.907209   24262 system_pods.go:61] "kube-apiserver-ha-912667-m02" [a420b2a1-207a-435f-98d2-893836a60e78] Running
	I0425 18:54:02.907212   24262 system_pods.go:61] "kube-apiserver-ha-912667-m03" [57c42509-6b00-4e6c-aec0-2780dcb8287e] Running
	I0425 18:54:02.907216   24262 system_pods.go:61] "kube-controller-manager-ha-912667" [6a91aebd-e142-4165-8acb-cc4c49a5df54] Running
	I0425 18:54:02.907219   24262 system_pods.go:61] "kube-controller-manager-ha-912667-m02" [e94e1a60-af79-4a8e-ac11-e7d36c3d68a3] Running
	I0425 18:54:02.907222   24262 system_pods.go:61] "kube-controller-manager-ha-912667-m03" [ed05c95f-7f91-4849-bbf6-0f140d571a46] Running
	I0425 18:54:02.907226   24262 system_pods.go:61] "kube-proxy-9zxln" [96e7485d-d971-49f2-9505-731cdf2f23ab] Running
	I0425 18:54:02.907231   24262 system_pods.go:61] "kube-proxy-mkgv5" [7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a] Running
	I0425 18:54:02.907235   24262 system_pods.go:61] "kube-proxy-rkbcp" [c62d3486-15d6-4398-a397-2f542d8fb074] Running
	I0425 18:54:02.907241   24262 system_pods.go:61] "kube-scheduler-ha-912667" [7dc33762-4bee-467e-9db4-d783ffe04992] Running
	I0425 18:54:02.907249   24262 system_pods.go:61] "kube-scheduler-ha-912667-m02" [d2ab7cf9-3cd9-4b0b-aec1-26aee5cf3b2a] Running
	I0425 18:54:02.907254   24262 system_pods.go:61] "kube-scheduler-ha-912667-m03" [f42a0409-358a-412a-a20e-0dd00e4e7fe3] Running
	I0425 18:54:02.907262   24262 system_pods.go:61] "kube-vip-ha-912667" [bd3267a7-206d-4e47-b154-a7f17a492684] Running
	I0425 18:54:02.907267   24262 system_pods.go:61] "kube-vip-ha-912667-m02" [c0622f7e-0264-4168-b510-7563083cc9d3] Running
	I0425 18:54:02.907274   24262 system_pods.go:61] "kube-vip-ha-912667-m03" [206ce495-8d7a-404d-ba1a-34edfa189d10] Running
	I0425 18:54:02.907279   24262 system_pods.go:61] "storage-provisioner" [f3a0b111-609d-49b3-a056-71eb4b641224] Running
	I0425 18:54:02.907290   24262 system_pods.go:74] duration metric: took 192.54719ms to wait for pod list to return data ...
	I0425 18:54:02.907303   24262 default_sa.go:34] waiting for default service account to be created ...
	I0425 18:54:03.087577   24262 request.go:629] Waited for 180.195404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/default/serviceaccounts
	I0425 18:54:03.087632   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/default/serviceaccounts
	I0425 18:54:03.087637   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:03.087644   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:03.087648   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:03.091310   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:03.091439   24262 default_sa.go:45] found service account: "default"
	I0425 18:54:03.091457   24262 default_sa.go:55] duration metric: took 184.144541ms for default service account to be created ...
	I0425 18:54:03.091469   24262 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 18:54:03.287883   24262 request.go:629] Waited for 196.339848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:54:03.287947   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:54:03.287955   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:03.287978   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:03.287985   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:03.296722   24262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0425 18:54:03.303343   24262 system_pods.go:86] 24 kube-system pods found
	I0425 18:54:03.303368   24262 system_pods.go:89] "coredns-7db6d8ff4d-22wvx" [56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e] Running
	I0425 18:54:03.303373   24262 system_pods.go:89] "coredns-7db6d8ff4d-h4s2h" [f9e2233c-5350-47ab-bdae-6fa35972b601] Running
	I0425 18:54:03.303378   24262 system_pods.go:89] "etcd-ha-912667" [d18fe5ec-655e-4da4-b8de-782eef846d55] Running
	I0425 18:54:03.303383   24262 system_pods.go:89] "etcd-ha-912667-m02" [8d6782f6-b00b-4d10-8a3a-452460974164] Running
	I0425 18:54:03.303387   24262 system_pods.go:89] "etcd-ha-912667-m03" [24ac9b8b-9f01-4edb-b82d-8bca7df1a74f] Running
	I0425 18:54:03.303391   24262 system_pods.go:89] "kindnet-gcbv6" [03aab1af-e03a-4ff7-bb92-6d22c1dd8d2a] Running
	I0425 18:54:03.303395   24262 system_pods.go:89] "kindnet-sq4lb" [049d5dc9-13ec-4135-8785-229071e57d1a] Running
	I0425 18:54:03.303398   24262 system_pods.go:89] "kindnet-xlvjt" [191ff28e-07d7-459e-afe5-e3d8c23e1016] Running
	I0425 18:54:03.303403   24262 system_pods.go:89] "kube-apiserver-ha-912667" [a8339e9c-d67f-4e84-ba79-754ad86fdf82] Running
	I0425 18:54:03.303407   24262 system_pods.go:89] "kube-apiserver-ha-912667-m02" [a420b2a1-207a-435f-98d2-893836a60e78] Running
	I0425 18:54:03.303411   24262 system_pods.go:89] "kube-apiserver-ha-912667-m03" [57c42509-6b00-4e6c-aec0-2780dcb8287e] Running
	I0425 18:54:03.303416   24262 system_pods.go:89] "kube-controller-manager-ha-912667" [6a91aebd-e142-4165-8acb-cc4c49a5df54] Running
	I0425 18:54:03.303421   24262 system_pods.go:89] "kube-controller-manager-ha-912667-m02" [e94e1a60-af79-4a8e-ac11-e7d36c3d68a3] Running
	I0425 18:54:03.303425   24262 system_pods.go:89] "kube-controller-manager-ha-912667-m03" [ed05c95f-7f91-4849-bbf6-0f140d571a46] Running
	I0425 18:54:03.303428   24262 system_pods.go:89] "kube-proxy-9zxln" [96e7485d-d971-49f2-9505-731cdf2f23ab] Running
	I0425 18:54:03.303432   24262 system_pods.go:89] "kube-proxy-mkgv5" [7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a] Running
	I0425 18:54:03.303435   24262 system_pods.go:89] "kube-proxy-rkbcp" [c62d3486-15d6-4398-a397-2f542d8fb074] Running
	I0425 18:54:03.303439   24262 system_pods.go:89] "kube-scheduler-ha-912667" [7dc33762-4bee-467e-9db4-d783ffe04992] Running
	I0425 18:54:03.303446   24262 system_pods.go:89] "kube-scheduler-ha-912667-m02" [d2ab7cf9-3cd9-4b0b-aec1-26aee5cf3b2a] Running
	I0425 18:54:03.303449   24262 system_pods.go:89] "kube-scheduler-ha-912667-m03" [f42a0409-358a-412a-a20e-0dd00e4e7fe3] Running
	I0425 18:54:03.303452   24262 system_pods.go:89] "kube-vip-ha-912667" [bd3267a7-206d-4e47-b154-a7f17a492684] Running
	I0425 18:54:03.303456   24262 system_pods.go:89] "kube-vip-ha-912667-m02" [c0622f7e-0264-4168-b510-7563083cc9d3] Running
	I0425 18:54:03.303459   24262 system_pods.go:89] "kube-vip-ha-912667-m03" [206ce495-8d7a-404d-ba1a-34edfa189d10] Running
	I0425 18:54:03.303465   24262 system_pods.go:89] "storage-provisioner" [f3a0b111-609d-49b3-a056-71eb4b641224] Running
	I0425 18:54:03.303470   24262 system_pods.go:126] duration metric: took 211.992421ms to wait for k8s-apps to be running ...
	I0425 18:54:03.303477   24262 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 18:54:03.303518   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:54:03.320069   24262 system_svc.go:56] duration metric: took 16.581113ms WaitForService to wait for kubelet
	I0425 18:54:03.320104   24262 kubeadm.go:576] duration metric: took 16.079199643s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 18:54:03.320125   24262 node_conditions.go:102] verifying NodePressure condition ...
	I0425 18:54:03.487802   24262 request.go:629] Waited for 167.588279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes
	I0425 18:54:03.487856   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes
	I0425 18:54:03.487862   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:03.487873   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:03.487882   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:03.492855   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:03.494180   24262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:54:03.494200   24262 node_conditions.go:123] node cpu capacity is 2
	I0425 18:54:03.494222   24262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:54:03.494228   24262 node_conditions.go:123] node cpu capacity is 2
	I0425 18:54:03.494234   24262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:54:03.494239   24262 node_conditions.go:123] node cpu capacity is 2
	I0425 18:54:03.494246   24262 node_conditions.go:105] duration metric: took 174.114337ms to run NodePressure ...
	I0425 18:54:03.494264   24262 start.go:240] waiting for startup goroutines ...
	I0425 18:54:03.494294   24262 start.go:254] writing updated cluster config ...
	I0425 18:54:03.494573   24262 ssh_runner.go:195] Run: rm -f paused
	I0425 18:54:03.545098   24262 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 18:54:03.547863   24262 out.go:177] * Done! kubectl is now configured to use "ha-912667" cluster and "default" namespace by default
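	The node-capacity figures above (ephemeral storage 17734596Ki and cpu 2, one pair per node of the three-node ha-912667 cluster) are read from the Node status returned by the GET /api/v1/nodes request shown in the log. A minimal client-go sketch that reads the same fields, assuming the default kubeconfig path minikube just configured (~/.kube/config); this is an illustrative reproduction, not minikube's own code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig for the "ha-912667" context (the default home path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes, the same request the log above issues before checking NodePressure.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Print the two capacity fields reported in the log for each node.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}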
	
	
	==> CRI-O <==
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.266899470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1adf95cd-a46b-49ac-809f-78ba95724b14 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.267123900Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071248602377761,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e68b1816950df1006cebe8ba8db228e4e894845505ce347266259b3e578daa,PodSandboxId:7f6b143ce4ab2496004c7e5c543759e65ce5ab68f51036cc9424cfd815f8b89f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071035239874404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034742556547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034727843572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-53
50-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47cf3b242de510131bcf58c4eead7934b5a457fa0fd6dc02c0376efb92cbd562,PodSandboxId:f26340b588292da1834879078cdffa8cf368a5c6832c6c9592659eaa2df3cc69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17140710
32863913405,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071032735256490,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24e946cc9871d59976b6e84efd38da336416d3442e75673080a8e5eb92ed6d4,PodSandboxId:d178c1dd267a0a71baecb334e62c5374a33e11b56ca0eed9f3aa0842d1a38ef7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071015803981933,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e92de90328c0d5bf0b78a6487dd065,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071012727880319,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98,PodSandboxId:7e20b6240b0cfc83339d367844cb1a47456b01ad53b8c97f3164eea50b34e875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071012693991926,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071012719200136,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353,PodSandboxId:73c1b7bec4c78211248abec36ca14f9fdf1fec9bf80bd4e86fa940f45b3ed05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071012685351732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1adf95cd-a46b-49ac-809f-78ba95724b14 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.310978327Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f63c456e-64f5-43b6-b46f-3fe3b5525b14 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.311056194Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f63c456e-64f5-43b6-b46f-3fe3b5525b14 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.312394747Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4972ccd3-17d5-48db-95c2-7de84175cffd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.312965419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714071457312938859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4972ccd3-17d5-48db-95c2-7de84175cffd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.313456836Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ce1ca6e-6118-447f-9807-3902a647eca4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.313541517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ce1ca6e-6118-447f-9807-3902a647eca4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.313852889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071248602377761,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e68b1816950df1006cebe8ba8db228e4e894845505ce347266259b3e578daa,PodSandboxId:7f6b143ce4ab2496004c7e5c543759e65ce5ab68f51036cc9424cfd815f8b89f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071035239874404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034742556547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034727843572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-53
50-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47cf3b242de510131bcf58c4eead7934b5a457fa0fd6dc02c0376efb92cbd562,PodSandboxId:f26340b588292da1834879078cdffa8cf368a5c6832c6c9592659eaa2df3cc69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17140710
32863913405,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071032735256490,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24e946cc9871d59976b6e84efd38da336416d3442e75673080a8e5eb92ed6d4,PodSandboxId:d178c1dd267a0a71baecb334e62c5374a33e11b56ca0eed9f3aa0842d1a38ef7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071015803981933,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e92de90328c0d5bf0b78a6487dd065,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071012727880319,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98,PodSandboxId:7e20b6240b0cfc83339d367844cb1a47456b01ad53b8c97f3164eea50b34e875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071012693991926,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071012719200136,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353,PodSandboxId:73c1b7bec4c78211248abec36ca14f9fdf1fec9bf80bd4e86fa940f45b3ed05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071012685351732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ce1ca6e-6118-447f-9807-3902a647eca4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.339384123Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=e5dade62-3ed1-4392-b471-bd0d65ec7ea8 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.339492421Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5dade62-3ed1-4392-b471-bd0d65ec7ea8 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.354417511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5af6c71-7c5b-466e-97aa-b59aaeabdd7f name=/runtime.v1.RuntimeService/Version
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.354519352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5af6c71-7c5b-466e-97aa-b59aaeabdd7f name=/runtime.v1.RuntimeService/Version
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.355378279Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80bb3ae5-4f34-4dcc-84df-d34dced814cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.355894801Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714071457355872931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80bb3ae5-4f34-4dcc-84df-d34dced814cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.356321834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1ea712e-030b-48da-955b-2f980f1ffddf name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.356404472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1ea712e-030b-48da-955b-2f980f1ffddf name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.356627034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071248602377761,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e68b1816950df1006cebe8ba8db228e4e894845505ce347266259b3e578daa,PodSandboxId:7f6b143ce4ab2496004c7e5c543759e65ce5ab68f51036cc9424cfd815f8b89f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071035239874404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034742556547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034727843572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-53
50-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47cf3b242de510131bcf58c4eead7934b5a457fa0fd6dc02c0376efb92cbd562,PodSandboxId:f26340b588292da1834879078cdffa8cf368a5c6832c6c9592659eaa2df3cc69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17140710
32863913405,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071032735256490,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24e946cc9871d59976b6e84efd38da336416d3442e75673080a8e5eb92ed6d4,PodSandboxId:d178c1dd267a0a71baecb334e62c5374a33e11b56ca0eed9f3aa0842d1a38ef7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071015803981933,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e92de90328c0d5bf0b78a6487dd065,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071012727880319,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98,PodSandboxId:7e20b6240b0cfc83339d367844cb1a47456b01ad53b8c97f3164eea50b34e875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071012693991926,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071012719200136,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353,PodSandboxId:73c1b7bec4c78211248abec36ca14f9fdf1fec9bf80bd4e86fa940f45b3ed05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071012685351732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1ea712e-030b-48da-955b-2f980f1ffddf name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.406797354Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4bf6135c-b191-4571-983e-8d384081a345 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.406972885Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4bf6135c-b191-4571-983e-8d384081a345 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.410525681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b756126b-3b8c-43ea-b8a5-53c48e1b5f14 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.411360456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714071457411335414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b756126b-3b8c-43ea-b8a5-53c48e1b5f14 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.412150053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d44abd8-874a-4f34-86da-4e82e6bd8388 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.412207490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d44abd8-874a-4f34-86da-4e82e6bd8388 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:57:37 ha-912667 crio[681]: time="2024-04-25 18:57:37.412429878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071248602377761,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e68b1816950df1006cebe8ba8db228e4e894845505ce347266259b3e578daa,PodSandboxId:7f6b143ce4ab2496004c7e5c543759e65ce5ab68f51036cc9424cfd815f8b89f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071035239874404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034742556547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034727843572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-53
50-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47cf3b242de510131bcf58c4eead7934b5a457fa0fd6dc02c0376efb92cbd562,PodSandboxId:f26340b588292da1834879078cdffa8cf368a5c6832c6c9592659eaa2df3cc69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17140710
32863913405,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071032735256490,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24e946cc9871d59976b6e84efd38da336416d3442e75673080a8e5eb92ed6d4,PodSandboxId:d178c1dd267a0a71baecb334e62c5374a33e11b56ca0eed9f3aa0842d1a38ef7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071015803981933,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e92de90328c0d5bf0b78a6487dd065,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071012727880319,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98,PodSandboxId:7e20b6240b0cfc83339d367844cb1a47456b01ad53b8c97f3164eea50b34e875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071012693991926,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071012719200136,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353,PodSandboxId:73c1b7bec4c78211248abec36ca14f9fdf1fec9bf80bd4e86fa940f45b3ed05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071012685351732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d44abd8-874a-4f34-86da-4e82e6bd8388 name=/runtime.v1.RuntimeService/ListContainers
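	The CRI-O debug entries above are the server side of CRI gRPC calls (Version, ImageFsInfo, ListContainers) arriving on the runtime socket while logs are collected. A minimal sketch of issuing the same ListContainers call with the cri-api Go client follows; the socket path is CRI-O's default and access normally requires root, so treat this as an illustration under those assumptions rather than part of the test harness:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial CRI-O's runtime socket (default path; adjust if the runtime is configured differently).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same call as "/runtime.v1.RuntimeService/ListContainers" in the log, with an empty filter,
	// which is why CRI-O logs "No filters were applied, returning full container list".
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Truncated ID, container name, and state, similar to the "container status" table below.
		fmt.Printf("%s  %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}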
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb806d6102b91       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4a7d7ef3e980e       busybox-fc5497c4f-nxhjn
	38e68b1816950       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   7f6b143ce4ab2       storage-provisioner
	5b5e973107f16       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   5f41aaba12a45       coredns-7db6d8ff4d-22wvx
	877510603b828       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   7eff20f80efe1       coredns-7db6d8ff4d-h4s2h
	47cf3b242de51       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   f26340b588292       kindnet-xlvjt
	35f0443a12a2f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago       Running             kube-proxy                0                   56d2b6ff099a0       kube-proxy-mkgv5
	e24e946cc9871       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   d178c1dd267a0       kube-vip-ha-912667
	6d0da8d06f797       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago       Running             kube-scheduler            0                   10902ac1c9f4f       kube-scheduler-ha-912667
	860c8d827dba6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   b27e008a10a06       etcd-ha-912667
	9c0bd11b87eb3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago       Running             kube-controller-manager   0                   7e20b6240b0cf       kube-controller-manager-ha-912667
	8ab9c0712a08a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago       Running             kube-apiserver            0                   73c1b7bec4c78       kube-apiserver-ha-912667
	
	
	==> coredns [5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50659 - 8179 "HINFO IN 4082603258215062617.8291093497106509912. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013059871s
	[INFO] 10.244.2.2:40968 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.005406616s
	[INFO] 10.244.2.2:35686 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.005142825s
	[INFO] 10.244.0.4:32831 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001738929s
	[INFO] 10.244.1.2:38408 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00017538s
	[INFO] 10.244.2.2:37503 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003970142s
	[INFO] 10.244.2.2:40887 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218678s
	[INFO] 10.244.0.4:49981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001952122s
	[INFO] 10.244.0.4:56986 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183129s
	[INFO] 10.244.0.4:33316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126163s
	[INFO] 10.244.1.2:34817 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000365634s
	[INFO] 10.244.1.2:38909 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001350261s
	[INFO] 10.244.1.2:51802 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101088s
	[INFO] 10.244.2.2:47175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020899s
	[INFO] 10.244.2.2:46654 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000319039s
	[INFO] 10.244.2.2:36020 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135369s
	[INFO] 10.244.1.2:58245 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248988s
	[INFO] 10.244.1.2:45237 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202978s
	[INFO] 10.244.0.4:52108 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149798s
	[INFO] 10.244.0.4:52793 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093152s
	[INFO] 10.244.1.2:57128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187429s
	[INFO] 10.244.1.2:40536 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186246s
	[INFO] 10.244.1.2:52690 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120066s
	
	
	==> coredns [877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275] <==
	[INFO] 10.244.2.2:46440 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000251173s
	[INFO] 10.244.0.4:46858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189598s
	[INFO] 10.244.0.4:39745 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154962s
	[INFO] 10.244.0.4:50677 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098624s
	[INFO] 10.244.0.4:47040 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001411651s
	[INFO] 10.244.0.4:51578 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122143s
	[INFO] 10.244.1.2:40259 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165953s
	[INFO] 10.244.1.2:39729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001829607s
	[INFO] 10.244.1.2:34733 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172404s
	[INFO] 10.244.1.2:45725 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129433s
	[INFO] 10.244.1.2:35820 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133249s
	[INFO] 10.244.2.2:40405 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00168841s
	[INFO] 10.244.0.4:40751 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000295717s
	[INFO] 10.244.0.4:35528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102349s
	[INFO] 10.244.0.4:36374 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00035359s
	[INFO] 10.244.0.4:51732 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098091s
	[INFO] 10.244.1.2:41291 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000329271s
	[INFO] 10.244.1.2:36756 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159777s
	[INFO] 10.244.2.2:54364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000374806s
	[INFO] 10.244.2.2:35469 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0003009s
	[INFO] 10.244.2.2:57557 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000412395s
	[INFO] 10.244.2.2:55375 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188342s
	[INFO] 10.244.0.4:50283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136579s
	[INFO] 10.244.0.4:60253 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000062518s
	[INFO] 10.244.1.2:48368 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000591883s
	
	
	==> describe nodes <==
	Name:               ha-912667
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T18_50_19_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:50:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 18:57:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 18:54:23 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 18:54:23 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 18:54:23 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 18:54:23 +0000   Thu, 25 Apr 2024 18:50:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    ha-912667
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3a8edadaa67460ebdc313c0c3e1c3f7
	  System UUID:                a3a8edad-aa67-460e-bdc3-13c0c3e1c3f7
	  Boot ID:                    dc005c29-5a5e-4df7-8967-c057d8b3aa0a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nxhjn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 coredns-7db6d8ff4d-22wvx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m5s
	  kube-system                 coredns-7db6d8ff4d-h4s2h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m5s
	  kube-system                 etcd-ha-912667                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m19s
	  kube-system                 kindnet-xlvjt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m6s
	  kube-system                 kube-apiserver-ha-912667             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-controller-manager-ha-912667    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-proxy-mkgv5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 kube-scheduler-ha-912667             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-vip-ha-912667                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m4s                   kube-proxy       
	  Normal  Starting                 7m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m25s (x7 over 7m26s)  kubelet          Node ha-912667 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m25s (x8 over 7m26s)  kubelet          Node ha-912667 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m25s (x8 over 7m26s)  kubelet          Node ha-912667 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m19s                  kubelet          Node ha-912667 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m19s                  kubelet          Node ha-912667 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m19s                  kubelet          Node ha-912667 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m7s                   node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal  NodeReady                7m3s                   kubelet          Node ha-912667 status is now: NodeReady
	  Normal  RegisteredNode           4m49s                  node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal  RegisteredNode           3m36s                  node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	
	
	Name:               ha-912667-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T18_52_33_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:52:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 18:55:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 25 Apr 2024 18:54:33 +0000   Thu, 25 Apr 2024 18:55:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 25 Apr 2024 18:54:33 +0000   Thu, 25 Apr 2024 18:55:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 25 Apr 2024 18:54:33 +0000   Thu, 25 Apr 2024 18:55:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 25 Apr 2024 18:54:33 +0000   Thu, 25 Apr 2024 18:55:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.66
	  Hostname:    ha-912667-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82894439088e4cc98841c062c296fef3
	  System UUID:                82894439-088e-4cc9-8841-c062c296fef3
	  Boot ID:                    a05283e8-2146-4bc2-bd15-7ae5e2b51bec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tcxzk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-912667-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m5s
	  kube-system                 kindnet-sq4lb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m7s
	  kube-system                 kube-apiserver-ha-912667-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-ha-912667-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-rkbcp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-scheduler-ha-912667-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-vip-ha-912667-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m2s                 kube-proxy       
	  Normal  RegisteredNode           5m7s                 node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m7s (x8 over 5m7s)  kubelet          Node ha-912667-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x8 over 5m7s)  kubelet          Node ha-912667-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x7 over 5m7s)  kubelet          Node ha-912667-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m49s                node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  RegisteredNode           3m36s                node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  NodeNotReady             101s                 node-controller  Node ha-912667-m02 status is now: NodeNotReady
	
	
	Name:               ha-912667-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T18_53_46_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:53:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 18:57:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 18:54:13 +0000   Thu, 25 Apr 2024 18:53:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 18:54:13 +0000   Thu, 25 Apr 2024 18:53:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 18:54:13 +0000   Thu, 25 Apr 2024 18:53:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 18:54:13 +0000   Thu, 25 Apr 2024 18:53:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    ha-912667-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b314db0e66974911a4c3c03513ed8a46
	  System UUID:                b314db0e-6697-4911-a4c3-c03513ed8a46
	  Boot ID:                    00746489-af97-4229-a221-4ab46c60d093
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6lkjg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-ha-912667-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m53s
	  kube-system                 kindnet-gcbv6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m55s
	  kube-system                 kube-apiserver-ha-912667-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-controller-manager-ha-912667-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-proxy-9zxln                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-scheduler-ha-912667-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-vip-ha-912667-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m55s (x8 over 3m55s)  kubelet          Node ha-912667-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s (x8 over 3m55s)  kubelet          Node ha-912667-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s (x7 over 3m55s)  kubelet          Node ha-912667-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	  Normal  RegisteredNode           3m36s                  node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	
	
	Name:               ha-912667-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T18_54_45_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:54:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 18:57:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 18:55:15 +0000   Thu, 25 Apr 2024 18:54:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 18:55:15 +0000   Thu, 25 Apr 2024 18:54:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 18:55:15 +0000   Thu, 25 Apr 2024 18:54:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 18:55:15 +0000   Thu, 25 Apr 2024 18:54:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-912667-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6d1da6a42954aa3b31899cd270783aa
	  System UUID:                c6d1da6a-4295-4aa3-b318-99cd270783aa
	  Boot ID:                    1273025a-2c47-413b-acda-da649c6acca7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4l974       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-proxy-64vg4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x2 over 2m53s)  kubelet          Node ha-912667-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x2 over 2m53s)  kubelet          Node ha-912667-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x2 over 2m53s)  kubelet          Node ha-912667-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-912667-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr25 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054310] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044068] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.656449] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.562589] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.723180] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr25 18:50] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.058108] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076447] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.197185] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.122034] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.313908] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.923241] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.067466] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.659823] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.462418] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.581179] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.076665] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.874397] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.005828] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f] <==
	{"level":"warn","ts":"2024-04-25T18:57:37.675773Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.715613Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.72513Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.733243Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.737095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.755526Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.767386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.776565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.780224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.784483Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.792354Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.799479Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.808597Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.812377Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.815821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.816105Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.824171Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.831165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.841037Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.844991Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.848663Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.857456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.863307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.876011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:57:37.916304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:57:37 up 7 min,  0 users,  load average: 0.45, 0.44, 0.23
	Linux ha-912667 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [47cf3b242de510131bcf58c4eead7934b5a457fa0fd6dc02c0376efb92cbd562] <==
	I0425 18:57:04.393314       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 18:57:14.406549       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 18:57:14.406598       1 main.go:227] handling current node
	I0425 18:57:14.406609       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 18:57:14.406616       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 18:57:14.406818       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0425 18:57:14.406852       1 main.go:250] Node ha-912667-m03 has CIDR [10.244.2.0/24] 
	I0425 18:57:14.406913       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 18:57:14.406942       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 18:57:24.422474       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 18:57:24.422519       1 main.go:227] handling current node
	I0425 18:57:24.422540       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 18:57:24.422552       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 18:57:24.422884       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0425 18:57:24.422920       1 main.go:250] Node ha-912667-m03 has CIDR [10.244.2.0/24] 
	I0425 18:57:24.423079       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 18:57:24.423127       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 18:57:34.430858       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 18:57:34.430904       1 main.go:227] handling current node
	I0425 18:57:34.430915       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 18:57:34.430921       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 18:57:34.431021       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0425 18:57:34.431056       1 main.go:250] Node ha-912667-m03 has CIDR [10.244.2.0/24] 
	I0425 18:57:34.431130       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 18:57:34.431171       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353] <==
	I0425 18:50:18.954492       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0425 18:50:18.972333       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0425 18:50:31.068906       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0425 18:50:31.818951       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0425 18:52:31.749227       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0425 18:52:31.749915       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0425 18:52:31.749805       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 146.53µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0425 18:52:31.751150       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0425 18:52:31.752523       1 timeout.go:142] post-timeout activity - time-elapsed: 3.455121ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0425 18:54:10.886575       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51404: use of closed network connection
	E0425 18:54:11.143229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51422: use of closed network connection
	E0425 18:54:11.380769       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51434: use of closed network connection
	E0425 18:54:11.608552       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51456: use of closed network connection
	E0425 18:54:11.822446       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51472: use of closed network connection
	E0425 18:54:12.039290       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51494: use of closed network connection
	E0425 18:54:12.261989       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51512: use of closed network connection
	E0425 18:54:12.479460       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51528: use of closed network connection
	E0425 18:54:12.705049       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51540: use of closed network connection
	E0425 18:54:13.065495       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51584: use of closed network connection
	E0425 18:54:13.298958       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51606: use of closed network connection
	E0425 18:54:13.532622       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51630: use of closed network connection
	E0425 18:54:13.738200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51638: use of closed network connection
	E0425 18:54:13.956410       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51664: use of closed network connection
	E0425 18:54:14.174400       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51692: use of closed network connection
	W0425 18:55:27.545284       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.179 192.168.39.189]
	
	
	==> kube-controller-manager [9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98] <==
	I0425 18:53:42.537776       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-912667-m03" podCIDRs=["10.244.2.0/24"]
	I0425 18:53:45.998575       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-912667-m03"
	I0425 18:54:04.582253       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128.844127ms"
	I0425 18:54:04.740175       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="157.28527ms"
	I0425 18:54:04.930996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="190.747285ms"
	I0425 18:54:04.953877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.546221ms"
	I0425 18:54:04.953987       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.88µs"
	I0425 18:54:05.920469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.67µs"
	I0425 18:54:05.938237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.17µs"
	I0425 18:54:05.948058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.15µs"
	I0425 18:54:08.758524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.970909ms"
	I0425 18:54:08.758846       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.843µs"
	I0425 18:54:08.952131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.477937ms"
	I0425 18:54:08.952325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="109.172µs"
	I0425 18:54:10.391181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.077782ms"
	I0425 18:54:10.391466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.189µs"
	E0425 18:54:44.924095       1 certificate_controller.go:146] Sync csr-k8grv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-k8grv": the object has been modified; please apply your changes to the latest version and try again
	E0425 18:54:44.924380       1 certificate_controller.go:146] Sync csr-k8grv failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-k8grv": the object has been modified; please apply your changes to the latest version and try again
	I0425 18:54:45.200886       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-912667-m04\" does not exist"
	I0425 18:54:45.242043       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-912667-m04" podCIDRs=["10.244.3.0/24"]
	I0425 18:54:46.029288       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-912667-m04"
	I0425 18:54:55.658622       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-912667-m04"
	I0425 18:55:56.074474       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-912667-m04"
	I0425 18:55:56.177160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.526494ms"
	I0425 18:55:56.177994       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.755µs"
	
	
	==> kube-proxy [35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386] <==
	I0425 18:50:33.066573       1 server_linux.go:69] "Using iptables proxy"
	I0425 18:50:33.092210       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.189"]
	I0425 18:50:33.176956       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 18:50:33.177064       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 18:50:33.177082       1 server_linux.go:165] "Using iptables Proxier"
	I0425 18:50:33.181211       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 18:50:33.181406       1 server.go:872] "Version info" version="v1.30.0"
	I0425 18:50:33.181417       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 18:50:33.183895       1 config.go:192] "Starting service config controller"
	I0425 18:50:33.183931       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 18:50:33.183950       1 config.go:101] "Starting endpoint slice config controller"
	I0425 18:50:33.183954       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 18:50:33.184523       1 config.go:319] "Starting node config controller"
	I0425 18:50:33.184529       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 18:50:33.284779       1 shared_informer.go:320] Caches are synced for node config
	I0425 18:50:33.284935       1 shared_informer.go:320] Caches are synced for service config
	I0425 18:50:33.285001       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5] <==
	E0425 18:54:04.525141       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod def1a9f3-c061-480c-9644-abd5c6c37078(default/busybox-fc5497c4f-6lkjg) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-6lkjg"
	E0425 18:54:04.525245       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6lkjg\": pod busybox-fc5497c4f-6lkjg is already assigned to node \"ha-912667-m03\"" pod="default/busybox-fc5497c4f-6lkjg"
	I0425 18:54:04.525314       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-6lkjg" node="ha-912667-m03"
	E0425 18:54:45.301209       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dx5dw\": pod kube-proxy-dx5dw is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dx5dw" node="ha-912667-m04"
	E0425 18:54:45.302601       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dx5dw\": pod kube-proxy-dx5dw is already assigned to node \"ha-912667-m04\"" pod="kube-system/kube-proxy-dx5dw"
	E0425 18:54:45.317163       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4l974\": pod kindnet-4l974 is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4l974" node="ha-912667-m04"
	E0425 18:54:45.322558       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 186c0056-6cc0-4696-b1ed-4d5013b794f6(kube-system/kindnet-4l974) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4l974"
	E0425 18:54:45.326251       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4l974\": pod kindnet-4l974 is already assigned to node \"ha-912667-m04\"" pod="kube-system/kindnet-4l974"
	I0425 18:54:45.326657       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4l974" node="ha-912667-m04"
	E0425 18:54:45.359237       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8dczp\": pod kindnet-8dczp is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-8dczp" node="ha-912667-m04"
	E0425 18:54:45.359625       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1615b53c-82a1-4989-8a5c-73d1ece27d1d(kube-system/kindnet-8dczp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-8dczp"
	E0425 18:54:45.359841       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8dczp\": pod kindnet-8dczp is already assigned to node \"ha-912667-m04\"" pod="kube-system/kindnet-8dczp"
	I0425 18:54:45.359969       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8dczp" node="ha-912667-m04"
	E0425 18:54:45.370471       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6fpnz\": pod kube-proxy-6fpnz is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6fpnz" node="ha-912667-m04"
	E0425 18:54:45.371240       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 50f143aa-15a7-468d-a01b-80259f6b5d9f(kube-system/kube-proxy-6fpnz) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-6fpnz"
	E0425 18:54:45.371330       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6fpnz\": pod kube-proxy-6fpnz is already assigned to node \"ha-912667-m04\"" pod="kube-system/kube-proxy-6fpnz"
	I0425 18:54:45.371405       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6fpnz" node="ha-912667-m04"
	E0425 18:54:45.423818       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tdqkk\": pod kindnet-tdqkk is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tdqkk" node="ha-912667-m04"
	E0425 18:54:45.427112       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 75fe41f7-fcc1-4042-b309-50d32525a2aa(kube-system/kindnet-tdqkk) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tdqkk"
	E0425 18:54:45.427396       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tdqkk\": pod kindnet-tdqkk is already assigned to node \"ha-912667-m04\"" pod="kube-system/kindnet-tdqkk"
	I0425 18:54:45.427574       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tdqkk" node="ha-912667-m04"
	E0425 18:54:45.446814       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-55svm\": pod kube-proxy-55svm is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-55svm" node="ha-912667-m04"
	E0425 18:54:45.447116       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 29859480-e924-4cec-bc56-f342570ee22a(kube-system/kube-proxy-55svm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-55svm"
	E0425 18:54:45.447223       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-55svm\": pod kube-proxy-55svm is already assigned to node \"ha-912667-m04\"" pod="kube-system/kube-proxy-55svm"
	I0425 18:54:45.447369       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-55svm" node="ha-912667-m04"
	
	
	==> kubelet <==
	Apr 25 18:54:05 ha-912667 kubelet[1386]: I0425 18:54:05.806375    1386 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l878\" (UniqueName: \"kubernetes.io/projected/ed27e764-9caf-4038-9b3a-5040b4d006c8-kube-api-access-4l878\") pod \"ed27e764-9caf-4038-9b3a-5040b4d006c8\" (UID: \"ed27e764-9caf-4038-9b3a-5040b4d006c8\") "
	Apr 25 18:54:05 ha-912667 kubelet[1386]: I0425 18:54:05.809825    1386 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed27e764-9caf-4038-9b3a-5040b4d006c8-kube-api-access-4l878" (OuterVolumeSpecName: "kube-api-access-4l878") pod "ed27e764-9caf-4038-9b3a-5040b4d006c8" (UID: "ed27e764-9caf-4038-9b3a-5040b4d006c8"). InnerVolumeSpecName "kube-api-access-4l878". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 25 18:54:05 ha-912667 kubelet[1386]: I0425 18:54:05.907091    1386 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4l878\" (UniqueName: \"kubernetes.io/projected/ed27e764-9caf-4038-9b3a-5040b4d006c8-kube-api-access-4l878\") on node \"ha-912667\" DevicePath \"\""
	Apr 25 18:54:06 ha-912667 kubelet[1386]: I0425 18:54:06.886585    1386 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed27e764-9caf-4038-9b3a-5040b4d006c8" path="/var/lib/kubelet/pods/ed27e764-9caf-4038-9b3a-5040b4d006c8/volumes"
	Apr 25 18:54:13 ha-912667 kubelet[1386]: E0425 18:54:13.738860    1386 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47274->127.0.0.1:38153: write tcp 127.0.0.1:47274->127.0.0.1:38153: write: broken pipe
	Apr 25 18:54:18 ha-912667 kubelet[1386]: E0425 18:54:18.915224    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:54:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:54:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:54:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:54:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 18:55:18 ha-912667 kubelet[1386]: E0425 18:55:18.915306    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:55:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:55:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:55:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:55:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 18:56:18 ha-912667 kubelet[1386]: E0425 18:56:18.917242    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:56:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:56:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:56:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:56:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 18:57:18 ha-912667 kubelet[1386]: E0425 18:57:18.913233    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:57:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:57:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:57:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:57:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-912667 -n ha-912667
helpers_test.go:261: (dbg) Run:  kubectl --context ha-912667 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (50.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr: exit status 3 (3.19447865s)

                                                
                                                
-- stdout --
	ha-912667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-912667-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 18:57:42.548181   29279 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:57:42.548277   29279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:57:42.548283   29279 out.go:304] Setting ErrFile to fd 2...
	I0425 18:57:42.548287   29279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:57:42.548491   29279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:57:42.548645   29279 out.go:298] Setting JSON to false
	I0425 18:57:42.548672   29279 mustload.go:65] Loading cluster: ha-912667
	I0425 18:57:42.548795   29279 notify.go:220] Checking for updates...
	I0425 18:57:42.549024   29279 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:57:42.549037   29279 status.go:255] checking status of ha-912667 ...
	I0425 18:57:42.549408   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:42.549473   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:42.568314   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0425 18:57:42.568752   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:42.569358   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:42.569386   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:42.569679   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:42.569854   29279 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:57:42.571185   29279 status.go:330] ha-912667 host status = "Running" (err=<nil>)
	I0425 18:57:42.571200   29279 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:57:42.571477   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:42.571508   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:42.585468   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45347
	I0425 18:57:42.585856   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:42.586313   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:42.586333   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:42.586655   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:42.586808   29279 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:57:42.589558   29279 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:42.590013   29279 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:57:42.590054   29279 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:42.590189   29279 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:57:42.590596   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:42.590643   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:42.604053   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35819
	I0425 18:57:42.604437   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:42.604853   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:42.604873   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:42.605177   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:42.605422   29279 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:57:42.605607   29279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:42.605630   29279 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:57:42.608326   29279 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:42.608777   29279 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:57:42.608806   29279 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:42.608923   29279 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:57:42.609068   29279 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:57:42.609221   29279 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:57:42.609383   29279 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:57:42.694890   29279 ssh_runner.go:195] Run: systemctl --version
	I0425 18:57:42.702329   29279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:42.720385   29279 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:57:42.720410   29279 api_server.go:166] Checking apiserver status ...
	I0425 18:57:42.720449   29279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:57:42.740596   29279 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0425 18:57:42.752407   29279 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:57:42.752474   29279 ssh_runner.go:195] Run: ls
	I0425 18:57:42.758141   29279 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:57:42.762171   29279 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:57:42.762191   29279 status.go:422] ha-912667 apiserver status = Running (err=<nil>)
	I0425 18:57:42.762221   29279 status.go:257] ha-912667 status: &{Name:ha-912667 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:57:42.762247   29279 status.go:255] checking status of ha-912667-m02 ...
	I0425 18:57:42.762584   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:42.762631   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:42.777427   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
	I0425 18:57:42.777891   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:42.778387   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:42.778409   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:42.778717   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:42.778928   29279 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 18:57:42.780504   29279 status.go:330] ha-912667-m02 host status = "Running" (err=<nil>)
	I0425 18:57:42.780521   29279 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:57:42.780801   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:42.780842   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:42.795477   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41641
	I0425 18:57:42.795938   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:42.796728   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:42.796765   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:42.797281   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:42.797651   29279 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:57:42.800637   29279 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:42.801078   29279 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:57:42.801097   29279 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:42.801226   29279 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:57:42.801621   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:42.801679   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:42.819303   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0425 18:57:42.819669   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:42.820117   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:42.820143   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:42.820402   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:42.820602   29279 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:57:42.820768   29279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:42.820791   29279 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:57:42.823395   29279 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:42.823809   29279 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:57:42.823845   29279 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:42.823990   29279 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:57:42.824177   29279 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:57:42.824328   29279 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:57:42.824468   29279 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	W0425 18:57:45.322560   29279 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.66:22: connect: no route to host
	W0425 18:57:45.322646   29279 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	E0425 18:57:45.322670   29279 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:57:45.322685   29279 status.go:257] ha-912667-m02 status: &{Name:ha-912667-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 18:57:45.322708   29279 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:57:45.322721   29279 status.go:255] checking status of ha-912667-m03 ...
	I0425 18:57:45.323116   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:45.323176   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:45.338461   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0425 18:57:45.338859   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:45.339329   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:45.339357   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:45.339671   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:45.339866   29279 main.go:141] libmachine: (ha-912667-m03) Calling .GetState
	I0425 18:57:45.341553   29279 status.go:330] ha-912667-m03 host status = "Running" (err=<nil>)
	I0425 18:57:45.341569   29279 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:57:45.341860   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:45.341897   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:45.356060   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42731
	I0425 18:57:45.356386   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:45.356815   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:45.356856   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:45.357154   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:45.357336   29279 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:57:45.360242   29279 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:45.360661   29279 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:57:45.360682   29279 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:45.360839   29279 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:57:45.361118   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:45.361155   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:45.375735   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37757
	I0425 18:57:45.376111   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:45.376555   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:45.376574   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:45.376897   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:45.377062   29279 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:57:45.377218   29279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:45.377239   29279 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:57:45.380081   29279 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:45.380503   29279 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:57:45.380540   29279 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:45.380692   29279 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:57:45.380863   29279 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:57:45.381080   29279 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:57:45.381222   29279 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:57:45.467818   29279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:45.493663   29279 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:57:45.493700   29279 api_server.go:166] Checking apiserver status ...
	I0425 18:57:45.493750   29279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:57:45.511046   29279 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0425 18:57:45.522923   29279 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:57:45.522966   29279 ssh_runner.go:195] Run: ls
	I0425 18:57:45.528005   29279 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:57:45.532983   29279 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:57:45.533001   29279 status.go:422] ha-912667-m03 apiserver status = Running (err=<nil>)
	I0425 18:57:45.533008   29279 status.go:257] ha-912667-m03 status: &{Name:ha-912667-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:57:45.533025   29279 status.go:255] checking status of ha-912667-m04 ...
	I0425 18:57:45.533316   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:45.533353   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:45.548267   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42817
	I0425 18:57:45.548613   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:45.549069   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:45.549089   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:45.549352   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:45.549557   29279 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 18:57:45.551002   29279 status.go:330] ha-912667-m04 host status = "Running" (err=<nil>)
	I0425 18:57:45.551016   29279 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:57:45.551280   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:45.551319   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:45.565213   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0425 18:57:45.565627   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:45.566083   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:45.566104   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:45.566452   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:45.566620   29279 main.go:141] libmachine: (ha-912667-m04) Calling .GetIP
	I0425 18:57:45.569114   29279 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:45.569490   29279 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:57:45.569525   29279 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:45.569621   29279 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:57:45.569905   29279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:45.569938   29279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:45.583650   29279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37803
	I0425 18:57:45.583968   29279 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:45.584389   29279 main.go:141] libmachine: Using API Version  1
	I0425 18:57:45.584416   29279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:45.584739   29279 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:45.584899   29279 main.go:141] libmachine: (ha-912667-m04) Calling .DriverName
	I0425 18:57:45.585074   29279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:45.585090   29279 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHHostname
	I0425 18:57:45.587574   29279 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:45.588014   29279 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:57:45.588041   29279 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:45.588179   29279 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHPort
	I0425 18:57:45.588329   29279 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHKeyPath
	I0425 18:57:45.588478   29279 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHUsername
	I0425 18:57:45.588591   29279 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m04/id_rsa Username:docker}
	I0425 18:57:45.675689   29279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:45.690585   29279 status.go:257] ha-912667-m04 status: &{Name:ha-912667-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr: exit status 3 (4.825941804s)

                                                
                                                
-- stdout --
	ha-912667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-912667-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 18:57:47.053353   29386 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:57:47.053492   29386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:57:47.053503   29386 out.go:304] Setting ErrFile to fd 2...
	I0425 18:57:47.053509   29386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:57:47.053780   29386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:57:47.053952   29386 out.go:298] Setting JSON to false
	I0425 18:57:47.053976   29386 mustload.go:65] Loading cluster: ha-912667
	I0425 18:57:47.054028   29386 notify.go:220] Checking for updates...
	I0425 18:57:47.054440   29386 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:57:47.054461   29386 status.go:255] checking status of ha-912667 ...
	I0425 18:57:47.054991   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:47.055025   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:47.069841   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37135
	I0425 18:57:47.070247   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:47.070824   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:47.070843   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:47.071269   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:47.071501   29386 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:57:47.073346   29386 status.go:330] ha-912667 host status = "Running" (err=<nil>)
	I0425 18:57:47.073363   29386 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:57:47.073621   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:47.073669   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:47.088329   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36941
	I0425 18:57:47.088690   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:47.089153   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:47.089167   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:47.089453   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:47.089631   29386 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:57:47.092304   29386 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:47.092741   29386 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:57:47.092764   29386 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:47.092911   29386 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:57:47.093166   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:47.093203   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:47.107000   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37051
	I0425 18:57:47.107395   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:47.107881   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:47.107900   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:47.108204   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:47.108377   29386 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:57:47.108593   29386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:47.108622   29386 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:57:47.111417   29386 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:47.111826   29386 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:57:47.111874   29386 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:47.112037   29386 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:57:47.112196   29386 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:57:47.112325   29386 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:57:47.112448   29386 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:57:47.199450   29386 ssh_runner.go:195] Run: systemctl --version
	I0425 18:57:47.207090   29386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:47.233566   29386 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:57:47.233593   29386 api_server.go:166] Checking apiserver status ...
	I0425 18:57:47.233629   29386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:57:47.253266   29386 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0425 18:57:47.263231   29386 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:57:47.263284   29386 ssh_runner.go:195] Run: ls
	I0425 18:57:47.269246   29386 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:57:47.275504   29386 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:57:47.275531   29386 status.go:422] ha-912667 apiserver status = Running (err=<nil>)
	I0425 18:57:47.275542   29386 status.go:257] ha-912667 status: &{Name:ha-912667 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:57:47.275557   29386 status.go:255] checking status of ha-912667-m02 ...
	I0425 18:57:47.275951   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:47.275996   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:47.292551   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41277
	I0425 18:57:47.292930   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:47.293458   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:47.293479   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:47.293794   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:47.293971   29386 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 18:57:47.295668   29386 status.go:330] ha-912667-m02 host status = "Running" (err=<nil>)
	I0425 18:57:47.295683   29386 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:57:47.295971   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:47.296010   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:47.310964   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40739
	I0425 18:57:47.311380   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:47.311820   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:47.311845   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:47.312159   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:47.312357   29386 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:57:47.315672   29386 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:47.316111   29386 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:57:47.316137   29386 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:47.316264   29386 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:57:47.316577   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:47.316615   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:47.331930   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0425 18:57:47.332383   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:47.332888   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:47.332916   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:47.333300   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:47.333500   29386 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:57:47.333702   29386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:47.333736   29386 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:57:47.336292   29386 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:47.336712   29386 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:57:47.336726   29386 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:47.336864   29386 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:57:47.337043   29386 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:57:47.337188   29386 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:57:47.337306   29386 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	W0425 18:57:48.394525   29386 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:57:48.394567   29386 retry.go:31] will retry after 312.116058ms: dial tcp 192.168.39.66:22: connect: no route to host
	W0425 18:57:51.466518   29386 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.66:22: connect: no route to host
	W0425 18:57:51.466626   29386 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	E0425 18:57:51.466651   29386 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:57:51.466665   29386 status.go:257] ha-912667-m02 status: &{Name:ha-912667-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 18:57:51.466700   29386 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:57:51.466714   29386 status.go:255] checking status of ha-912667-m03 ...
	I0425 18:57:51.467002   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:51.467051   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:51.481366   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40151
	I0425 18:57:51.481846   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:51.482353   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:51.482379   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:51.482684   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:51.482865   29386 main.go:141] libmachine: (ha-912667-m03) Calling .GetState
	I0425 18:57:51.484443   29386 status.go:330] ha-912667-m03 host status = "Running" (err=<nil>)
	I0425 18:57:51.484456   29386 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:57:51.484763   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:51.484796   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:51.499553   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44417
	I0425 18:57:51.499912   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:51.500313   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:51.500357   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:51.500693   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:51.500876   29386 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:57:51.503583   29386 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:51.503984   29386 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:57:51.504021   29386 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:51.504140   29386 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:57:51.504430   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:51.504464   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:51.517903   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37591
	I0425 18:57:51.518266   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:51.518691   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:51.518709   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:51.519014   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:51.519220   29386 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:57:51.519393   29386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:51.519417   29386 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:57:51.522388   29386 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:51.522788   29386 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:57:51.522814   29386 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:51.522977   29386 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:57:51.523106   29386 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:57:51.523295   29386 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:57:51.523442   29386 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:57:51.606658   29386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:51.624771   29386 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:57:51.624804   29386 api_server.go:166] Checking apiserver status ...
	I0425 18:57:51.624840   29386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:57:51.640928   29386 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0425 18:57:51.653048   29386 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:57:51.653098   29386 ssh_runner.go:195] Run: ls
	I0425 18:57:51.658494   29386 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:57:51.663390   29386 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:57:51.663412   29386 status.go:422] ha-912667-m03 apiserver status = Running (err=<nil>)
	I0425 18:57:51.663421   29386 status.go:257] ha-912667-m03 status: &{Name:ha-912667-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:57:51.663435   29386 status.go:255] checking status of ha-912667-m04 ...
	I0425 18:57:51.663708   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:51.663751   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:51.678887   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36963
	I0425 18:57:51.679281   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:51.679719   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:51.679744   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:51.680051   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:51.680238   29386 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 18:57:51.681798   29386 status.go:330] ha-912667-m04 host status = "Running" (err=<nil>)
	I0425 18:57:51.681813   29386 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:57:51.682224   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:51.682265   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:51.697180   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39897
	I0425 18:57:51.697674   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:51.698153   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:51.698179   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:51.698506   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:51.698754   29386 main.go:141] libmachine: (ha-912667-m04) Calling .GetIP
	I0425 18:57:51.701426   29386 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:51.701820   29386 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:57:51.701851   29386 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:51.702000   29386 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:57:51.702305   29386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:51.702338   29386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:51.718102   29386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
	I0425 18:57:51.718467   29386 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:51.718915   29386 main.go:141] libmachine: Using API Version  1
	I0425 18:57:51.718935   29386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:51.719222   29386 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:51.719409   29386 main.go:141] libmachine: (ha-912667-m04) Calling .DriverName
	I0425 18:57:51.719570   29386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:51.719591   29386 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHHostname
	I0425 18:57:51.722127   29386 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:51.722545   29386 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:57:51.722569   29386 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:51.722686   29386 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHPort
	I0425 18:57:51.722836   29386 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHKeyPath
	I0425 18:57:51.722982   29386 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHUsername
	I0425 18:57:51.723111   29386 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m04/id_rsa Username:docker}
	I0425 18:57:51.806234   29386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:51.821702   29386 status.go:257] ha-912667-m04 status: &{Name:ha-912667-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
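For context on the checks recorded above: the apiserver status for each control-plane node is derived by finding the kube-apiserver PID with pgrep, attempting the freezer-cgroup lookup (which exits 1 here, producing the logged warning), and then probing the load-balanced endpoint https://192.168.39.254:8443/healthz, where a 200 response yields "apiserver status = Running". The Go program below is a minimal, self-contained sketch of such a healthz probe; it is illustrative only, not minikube's implementation, and skipping certificate verification is an assumption made just for the example.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// probeHealthz is a hedged sketch of an apiserver healthz check like the
	// one logged above: it treats an HTTP 200 from /healthz as "Running".
	func probeHealthz(endpoint string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster uses a self-signed CA, so verification is
				// skipped in this illustration only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		if err := probeHealthz("https://192.168.39.254:8443"); err != nil {
			fmt.Println("apiserver status = Error:", err)
			return
		}
		fmt.Println("apiserver status = Running")
	}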
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr: exit status 3 (4.825248514s)

                                                
                                                
-- stdout --
	ha-912667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-912667-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
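The per-node summary in the stdout block above is rendered from the per-node records visible in the stderr traces (the "status.go:257" lines). The sketch below uses a hypothetical Status struct, not minikube's actual type, to show how the Host:Error record for ha-912667-m02 and a Worker record map onto the printed fields; worker nodes omit the apiserver and kubeconfig lines.

	package main

	import "fmt"

	// Status is an illustrative shape of the per-node record printed by
	// `minikube status`; field values mirror the status.go:257 log lines.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func render(s Status) {
		fmt.Println(s.Name)
		if s.Worker {
			fmt.Println("type: Worker")
		} else {
			fmt.Println("type: Control Plane")
		}
		fmt.Println("host:", s.Host)
		fmt.Println("kubelet:", s.Kubelet)
		if !s.Worker {
			// Workers report apiserver/kubeconfig as Irrelevant, so the
			// summary drops those two lines for them.
			fmt.Println("apiserver:", s.APIServer)
			fmt.Println("kubeconfig:", s.Kubeconfig)
		}
		fmt.Println()
	}

	func main() {
		render(Status{Name: "ha-912667-m02", Host: "Error", Kubelet: "Nonexistent",
			APIServer: "Nonexistent", Kubeconfig: "Configured"})
		render(Status{Name: "ha-912667-m04", Host: "Running", Kubelet: "Running",
			APIServer: "Irrelevant", Kubeconfig: "Irrelevant", Worker: true})
	}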
** stderr ** 
	I0425 18:57:53.329515   29488 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:57:53.329765   29488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:57:53.329775   29488 out.go:304] Setting ErrFile to fd 2...
	I0425 18:57:53.329781   29488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:57:53.329955   29488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:57:53.330134   29488 out.go:298] Setting JSON to false
	I0425 18:57:53.330165   29488 mustload.go:65] Loading cluster: ha-912667
	I0425 18:57:53.330264   29488 notify.go:220] Checking for updates...
	I0425 18:57:53.330609   29488 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:57:53.330630   29488 status.go:255] checking status of ha-912667 ...
	I0425 18:57:53.331055   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:53.331114   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:53.347208   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46361
	I0425 18:57:53.347548   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:53.348075   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:53.348100   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:53.348448   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:53.348668   29488 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:57:53.350116   29488 status.go:330] ha-912667 host status = "Running" (err=<nil>)
	I0425 18:57:53.350129   29488 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:57:53.350446   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:53.350488   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:53.364596   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38041
	I0425 18:57:53.365056   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:53.365515   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:53.365533   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:53.365847   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:53.366004   29488 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:57:53.368627   29488 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:53.369012   29488 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:57:53.369036   29488 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:53.369198   29488 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:57:53.369621   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:53.369669   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:53.383794   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41363
	I0425 18:57:53.384188   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:53.384623   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:53.384645   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:53.384958   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:53.385130   29488 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:57:53.385308   29488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:53.385326   29488 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:57:53.387933   29488 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:53.388343   29488 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:57:53.388369   29488 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:57:53.388518   29488 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:57:53.388695   29488 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:57:53.388837   29488 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:57:53.388984   29488 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:57:53.477330   29488 ssh_runner.go:195] Run: systemctl --version
	I0425 18:57:53.484377   29488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:53.502945   29488 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:57:53.502975   29488 api_server.go:166] Checking apiserver status ...
	I0425 18:57:53.503007   29488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:57:53.519634   29488 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0425 18:57:53.533334   29488 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:57:53.533421   29488 ssh_runner.go:195] Run: ls
	I0425 18:57:53.539770   29488 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:57:53.546452   29488 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:57:53.546477   29488 status.go:422] ha-912667 apiserver status = Running (err=<nil>)
	I0425 18:57:53.546489   29488 status.go:257] ha-912667 status: &{Name:ha-912667 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:57:53.546508   29488 status.go:255] checking status of ha-912667-m02 ...
	I0425 18:57:53.546896   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:53.546939   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:53.564456   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I0425 18:57:53.564880   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:53.565320   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:53.565348   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:53.565715   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:53.565933   29488 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 18:57:53.567351   29488 status.go:330] ha-912667-m02 host status = "Running" (err=<nil>)
	I0425 18:57:53.567376   29488 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:57:53.567676   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:53.567714   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:53.582348   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45037
	I0425 18:57:53.582753   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:53.583213   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:53.583241   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:53.583617   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:53.583802   29488 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:57:53.586545   29488 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:53.586986   29488 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:57:53.587011   29488 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:53.587134   29488 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:57:53.587431   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:53.587472   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:53.603146   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43551
	I0425 18:57:53.603600   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:53.604120   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:53.604152   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:53.604522   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:53.604714   29488 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:57:53.604892   29488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:53.604910   29488 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:57:53.607732   29488 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:53.608193   29488 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:57:53.608218   29488 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:57:53.608393   29488 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:57:53.608761   29488 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:57:53.608916   29488 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:57:53.609056   29488 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	W0425 18:57:54.542449   29488 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:57:54.542490   29488 retry.go:31] will retry after 140.312194ms: dial tcp 192.168.39.66:22: connect: no route to host
	W0425 18:57:57.738423   29488 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.66:22: connect: no route to host
	W0425 18:57:57.738522   29488 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	E0425 18:57:57.738540   29488 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:57:57.738547   29488 status.go:257] ha-912667-m02 status: &{Name:ha-912667-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 18:57:57.738568   29488 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:57:57.738580   29488 status.go:255] checking status of ha-912667-m03 ...
	I0425 18:57:57.739018   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:57.739064   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:57.755176   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I0425 18:57:57.755680   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:57.756192   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:57.756213   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:57.756523   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:57.756705   29488 main.go:141] libmachine: (ha-912667-m03) Calling .GetState
	I0425 18:57:57.758486   29488 status.go:330] ha-912667-m03 host status = "Running" (err=<nil>)
	I0425 18:57:57.758503   29488 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:57:57.758922   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:57.758969   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:57.773886   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39581
	I0425 18:57:57.774337   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:57.774747   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:57.774770   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:57.775124   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:57.775314   29488 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:57:57.778316   29488 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:57.778734   29488 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:57:57.778759   29488 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:57.778892   29488 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:57:57.779187   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:57.779221   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:57.793290   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40059
	I0425 18:57:57.793699   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:57.794123   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:57.794143   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:57.794479   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:57.794639   29488 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:57:57.794851   29488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:57.794871   29488 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:57:57.797332   29488 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:57.797741   29488 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:57:57.797768   29488 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:57:57.797909   29488 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:57:57.798091   29488 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:57:57.798256   29488 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:57:57.798502   29488 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:57:57.883113   29488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:57.899115   29488 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:57:57.899141   29488 api_server.go:166] Checking apiserver status ...
	I0425 18:57:57.899173   29488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:57:57.914641   29488 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0425 18:57:57.926407   29488 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:57:57.926471   29488 ssh_runner.go:195] Run: ls
	I0425 18:57:57.932154   29488 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:57:57.938416   29488 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:57:57.938449   29488 status.go:422] ha-912667-m03 apiserver status = Running (err=<nil>)
	I0425 18:57:57.938474   29488 status.go:257] ha-912667-m03 status: &{Name:ha-912667-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:57:57.938498   29488 status.go:255] checking status of ha-912667-m04 ...
	I0425 18:57:57.938865   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:57.938902   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:57.954086   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0425 18:57:57.954546   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:57.955055   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:57.955083   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:57.955399   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:57.955575   29488 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 18:57:57.957160   29488 status.go:330] ha-912667-m04 host status = "Running" (err=<nil>)
	I0425 18:57:57.957177   29488 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:57:57.957503   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:57.957548   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:57.972530   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44471
	I0425 18:57:57.973074   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:57.973576   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:57.973603   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:57.973960   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:57.974161   29488 main.go:141] libmachine: (ha-912667-m04) Calling .GetIP
	I0425 18:57:57.977073   29488 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:57.977563   29488 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:57:57.977584   29488 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:57.977749   29488 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:57:57.978070   29488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:57:57.978110   29488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:57:57.992833   29488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I0425 18:57:57.993277   29488 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:57:57.993757   29488 main.go:141] libmachine: Using API Version  1
	I0425 18:57:57.993776   29488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:57:57.994058   29488 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:57:57.994228   29488 main.go:141] libmachine: (ha-912667-m04) Calling .DriverName
	I0425 18:57:57.994391   29488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:57:57.994409   29488 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHHostname
	I0425 18:57:57.996918   29488 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:57.997300   29488 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:57:57.997337   29488 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:57:57.997473   29488 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHPort
	I0425 18:57:57.997609   29488 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHKeyPath
	I0425 18:57:57.997781   29488 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHUsername
	I0425 18:57:57.997923   29488 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m04/id_rsa Username:docker}
	I0425 18:57:58.083127   29488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:57:58.099025   29488 status.go:257] ha-912667-m04 status: &{Name:ha-912667-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
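Every failed run follows the same pattern for ha-912667-m02: the SSH dial to 192.168.39.66:22 fails with "no route to host", is retried by the sshutil/retry logic, and the node is then reported as Host:Error with kubelet and apiserver Nonexistent. The sketch below is a simplified reachability check under those assumptions; the attempt count and wait interval are illustrative values, not minikube's.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry attempts a TCP connection to the node's SSH port a few
	// times before declaring the host unreachable, mirroring the
	// "dial failure (will retry)" lines in the log above.
	func dialWithRetry(addr string, attempts int, wait time.Duration) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			lastErr = err
			time.Sleep(wait)
		}
		return fmt.Errorf("host unreachable after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		// 192.168.39.66 is ha-912667-m02's address from the log above.
		if err := dialWithRetry("192.168.39.66:22", 2, 200*time.Millisecond); err != nil {
			fmt.Println("ha-912667-m02 status: Host:Error -", err)
			return
		}
		fmt.Println("ha-912667-m02 status: Host:Running")
	}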
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr: exit status 3 (4.36029987s)

                                                
                                                
-- stdout --
	ha-912667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-912667-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 18:58:00.287104   29605 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:58:00.287373   29605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:58:00.287382   29605 out.go:304] Setting ErrFile to fd 2...
	I0425 18:58:00.287386   29605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:58:00.287579   29605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:58:00.287728   29605 out.go:298] Setting JSON to false
	I0425 18:58:00.287754   29605 mustload.go:65] Loading cluster: ha-912667
	I0425 18:58:00.287888   29605 notify.go:220] Checking for updates...
	I0425 18:58:00.288250   29605 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:58:00.288268   29605 status.go:255] checking status of ha-912667 ...
	I0425 18:58:00.288691   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:00.288817   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:00.306125   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39601
	I0425 18:58:00.306537   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:00.307166   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:00.307196   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:00.307520   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:00.307756   29605 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:58:00.309428   29605 status.go:330] ha-912667 host status = "Running" (err=<nil>)
	I0425 18:58:00.309446   29605 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:58:00.309700   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:00.309739   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:00.325645   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38167
	I0425 18:58:00.326056   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:00.326537   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:00.326552   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:00.326883   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:00.327093   29605 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:58:00.330131   29605 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:00.330646   29605 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:58:00.330676   29605 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:00.330924   29605 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:58:00.331326   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:00.331381   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:00.347417   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33713
	I0425 18:58:00.347784   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:00.348173   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:00.348193   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:00.348534   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:00.348703   29605 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:58:00.348918   29605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:00.348938   29605 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:58:00.351891   29605 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:00.352372   29605 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:58:00.352404   29605 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:00.352517   29605 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:58:00.352675   29605 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:58:00.352822   29605 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:58:00.352965   29605 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:58:00.436919   29605 ssh_runner.go:195] Run: systemctl --version
	I0425 18:58:00.443469   29605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:00.459673   29605 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:58:00.459698   29605 api_server.go:166] Checking apiserver status ...
	I0425 18:58:00.459725   29605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:58:00.473758   29605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0425 18:58:00.484638   29605 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:58:00.484696   29605 ssh_runner.go:195] Run: ls
	I0425 18:58:00.490288   29605 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:58:00.498558   29605 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:58:00.498579   29605 status.go:422] ha-912667 apiserver status = Running (err=<nil>)
	I0425 18:58:00.498591   29605 status.go:257] ha-912667 status: &{Name:ha-912667 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:58:00.498611   29605 status.go:255] checking status of ha-912667-m02 ...
	I0425 18:58:00.498911   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:00.498943   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:00.514479   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35207
	I0425 18:58:00.514827   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:00.515264   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:00.515286   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:00.515632   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:00.515825   29605 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 18:58:00.517338   29605 status.go:330] ha-912667-m02 host status = "Running" (err=<nil>)
	I0425 18:58:00.517350   29605 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:58:00.517632   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:00.517672   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:00.531785   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0425 18:58:00.532222   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:00.532731   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:00.532755   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:00.533038   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:00.533199   29605 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:58:00.536013   29605 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:00.536454   29605 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:58:00.536479   29605 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:00.536628   29605 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:58:00.537021   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:00.537067   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:00.551362   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41065
	I0425 18:58:00.552947   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:00.553419   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:00.553452   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:00.553838   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:00.554073   29605 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:58:00.554288   29605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:00.554306   29605 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:58:00.556954   29605 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:00.557363   29605 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:58:00.557391   29605 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:00.557549   29605 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:58:00.557730   29605 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:58:00.557859   29605 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:58:00.557962   29605 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	W0425 18:58:00.810417   29605 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:58:00.810483   29605 retry.go:31] will retry after 348.857903ms: dial tcp 192.168.39.66:22: connect: no route to host
	W0425 18:58:04.234450   29605 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.66:22: connect: no route to host
	W0425 18:58:04.234522   29605 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	E0425 18:58:04.234545   29605 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:58:04.234552   29605 status.go:257] ha-912667-m02 status: &{Name:ha-912667-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 18:58:04.234570   29605 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:58:04.234578   29605 status.go:255] checking status of ha-912667-m03 ...
	I0425 18:58:04.234856   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:04.234920   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:04.251299   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36493
	I0425 18:58:04.251747   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:04.252215   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:04.252243   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:04.252572   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:04.252758   29605 main.go:141] libmachine: (ha-912667-m03) Calling .GetState
	I0425 18:58:04.254345   29605 status.go:330] ha-912667-m03 host status = "Running" (err=<nil>)
	I0425 18:58:04.254363   29605 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:58:04.254626   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:04.254658   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:04.268501   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39737
	I0425 18:58:04.268865   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:04.269268   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:04.269295   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:04.269592   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:04.269851   29605 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:58:04.272578   29605 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:04.272962   29605 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:58:04.272992   29605 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:04.273112   29605 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:58:04.273398   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:04.273448   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:04.287217   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
	I0425 18:58:04.287506   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:04.287889   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:04.287908   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:04.288223   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:04.288412   29605 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:58:04.288597   29605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:04.288636   29605 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:58:04.291340   29605 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:04.291768   29605 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:58:04.291794   29605 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:04.291922   29605 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:58:04.292095   29605 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:58:04.292244   29605 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:58:04.292380   29605 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:58:04.374629   29605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:04.394697   29605 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:58:04.394728   29605 api_server.go:166] Checking apiserver status ...
	I0425 18:58:04.394802   29605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:58:04.411303   29605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0425 18:58:04.423501   29605 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:58:04.423560   29605 ssh_runner.go:195] Run: ls
	I0425 18:58:04.429186   29605 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:58:04.433879   29605 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:58:04.433901   29605 status.go:422] ha-912667-m03 apiserver status = Running (err=<nil>)
	I0425 18:58:04.433913   29605 status.go:257] ha-912667-m03 status: &{Name:ha-912667-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:58:04.433932   29605 status.go:255] checking status of ha-912667-m04 ...
	I0425 18:58:04.434345   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:04.434404   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:04.450308   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0425 18:58:04.450724   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:04.451187   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:04.451216   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:04.451541   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:04.451728   29605 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 18:58:04.453402   29605 status.go:330] ha-912667-m04 host status = "Running" (err=<nil>)
	I0425 18:58:04.453418   29605 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:58:04.453667   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:04.453701   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:04.467757   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45815
	I0425 18:58:04.468124   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:04.468530   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:04.468555   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:04.468872   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:04.469080   29605 main.go:141] libmachine: (ha-912667-m04) Calling .GetIP
	I0425 18:58:04.472099   29605 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:04.472581   29605 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:58:04.472606   29605 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:04.472789   29605 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:58:04.473052   29605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:04.473089   29605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:04.488160   29605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41781
	I0425 18:58:04.488591   29605 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:04.489033   29605 main.go:141] libmachine: Using API Version  1
	I0425 18:58:04.489056   29605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:04.489348   29605 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:04.489536   29605 main.go:141] libmachine: (ha-912667-m04) Calling .DriverName
	I0425 18:58:04.489715   29605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:04.489733   29605 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHHostname
	I0425 18:58:04.492530   29605 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:04.492982   29605 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:58:04.493012   29605 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:04.493175   29605 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHPort
	I0425 18:58:04.493347   29605 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHKeyPath
	I0425 18:58:04.493496   29605 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHUsername
	I0425 18:58:04.493650   29605 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m04/id_rsa Username:docker}
	I0425 18:58:04.578680   29605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:04.593997   29605 status.go:257] ha-912667-m04 status: &{Name:ha-912667-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
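The repeated "unable to find freezer cgroup" warnings above are most likely benign: the egrep exits 1 whenever /proc/<pid>/cgroup contains no v1 freezer entry, which is typically the case on a guest using the unified cgroup v2 hierarchy, and the status check then falls back to the healthz probe. The Go sketch below shows such a lookup; it is an illustration of the failing step, not minikube's code, and PID 1183 is reused from the log above purely as an example.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// freezerCgroup scans /proc/<pid>/cgroup for a v1 "freezer" controller
	// entry. On a cgroup v2 host the file holds a single "0::<path>" line,
	// so this returns an error, matching the warning logged above.
	func freezerCgroup(pid int) (string, error) {
		f, err := os.Open(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", err
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			// Each line is "<hierarchy-id>:<controllers>:<path>".
			fields := strings.SplitN(scanner.Text(), ":", 3)
			if len(fields) == 3 && strings.Contains(fields[1], "freezer") {
				return fields[2], nil
			}
		}
		return "", fmt.Errorf("no freezer cgroup for pid %d (likely cgroup v2)", pid)
	}

	func main() {
		path, err := freezerCgroup(1183)
		if err != nil {
			fmt.Println("warning:", err, "- falling back to the healthz probe")
			return
		}
		fmt.Println("freezer cgroup path:", path)
	}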
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr: exit status 3 (4.03727867s)

                                                
                                                
-- stdout --
	ha-912667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-912667-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 18:58:06.995903   29706 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:58:06.996052   29706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:58:06.996065   29706 out.go:304] Setting ErrFile to fd 2...
	I0425 18:58:06.996070   29706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:58:06.996271   29706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:58:06.996466   29706 out.go:298] Setting JSON to false
	I0425 18:58:06.996490   29706 mustload.go:65] Loading cluster: ha-912667
	I0425 18:58:06.996537   29706 notify.go:220] Checking for updates...
	I0425 18:58:06.996838   29706 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:58:06.996852   29706 status.go:255] checking status of ha-912667 ...
	I0425 18:58:06.997254   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:06.997312   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:07.013591   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46553
	I0425 18:58:07.014115   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:07.014757   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:07.014791   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:07.015151   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:07.015340   29706 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:58:07.017060   29706 status.go:330] ha-912667 host status = "Running" (err=<nil>)
	I0425 18:58:07.017078   29706 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:58:07.017504   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:07.017543   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:07.031975   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39017
	I0425 18:58:07.032391   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:07.032955   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:07.032988   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:07.033328   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:07.033539   29706 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:58:07.036314   29706 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:07.036752   29706 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:58:07.036793   29706 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:07.036904   29706 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:58:07.037237   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:07.037272   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:07.051441   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0425 18:58:07.051915   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:07.052348   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:07.052369   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:07.052658   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:07.052785   29706 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:58:07.052959   29706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:07.052986   29706 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:58:07.055440   29706 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:07.055835   29706 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:58:07.055863   29706 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:07.056022   29706 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:58:07.056197   29706 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:58:07.056353   29706 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:58:07.056501   29706 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:58:07.139936   29706 ssh_runner.go:195] Run: systemctl --version
	I0425 18:58:07.147398   29706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:07.165868   29706 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:58:07.165897   29706 api_server.go:166] Checking apiserver status ...
	I0425 18:58:07.165953   29706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:58:07.187769   29706 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0425 18:58:07.199527   29706 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:58:07.199582   29706 ssh_runner.go:195] Run: ls
	I0425 18:58:07.204580   29706 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:58:07.210771   29706 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:58:07.210795   29706 status.go:422] ha-912667 apiserver status = Running (err=<nil>)
	I0425 18:58:07.210805   29706 status.go:257] ha-912667 status: &{Name:ha-912667 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:58:07.210820   29706 status.go:255] checking status of ha-912667-m02 ...
	I0425 18:58:07.211094   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:07.211126   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:07.226386   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44553
	I0425 18:58:07.226798   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:07.227249   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:07.227272   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:07.227549   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:07.227742   29706 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 18:58:07.229489   29706 status.go:330] ha-912667-m02 host status = "Running" (err=<nil>)
	I0425 18:58:07.229507   29706 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:58:07.229906   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:07.229951   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:07.246138   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44573
	I0425 18:58:07.246510   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:07.246986   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:07.247007   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:07.247356   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:07.247543   29706 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:58:07.250159   29706 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:07.250532   29706 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:58:07.250557   29706 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:07.250685   29706 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:58:07.251070   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:07.251134   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:07.264814   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43489
	I0425 18:58:07.265142   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:07.265617   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:07.265640   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:07.265979   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:07.266122   29706 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:58:07.266312   29706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:07.266336   29706 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:58:07.268938   29706 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:07.269389   29706 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:58:07.269695   29706 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:58:07.269785   29706 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:07.269897   29706 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:58:07.270013   29706 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:58:07.270099   29706 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	W0425 18:58:07.306369   29706 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:58:07.306437   29706 retry.go:31] will retry after 216.238498ms: dial tcp 192.168.39.66:22: connect: no route to host
	W0425 18:58:10.602463   29706 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.66:22: connect: no route to host
	W0425 18:58:10.602557   29706 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	E0425 18:58:10.602586   29706 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:58:10.602599   29706 status.go:257] ha-912667-m02 status: &{Name:ha-912667-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 18:58:10.602640   29706 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:58:10.602655   29706 status.go:255] checking status of ha-912667-m03 ...
	I0425 18:58:10.602959   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:10.603013   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:10.619368   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0425 18:58:10.619899   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:10.620412   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:10.620442   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:10.620769   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:10.620942   29706 main.go:141] libmachine: (ha-912667-m03) Calling .GetState
	I0425 18:58:10.622504   29706 status.go:330] ha-912667-m03 host status = "Running" (err=<nil>)
	I0425 18:58:10.622531   29706 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:58:10.622800   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:10.622833   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:10.638767   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0425 18:58:10.639225   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:10.639780   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:10.639811   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:10.640153   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:10.640344   29706 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:58:10.643070   29706 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:10.643494   29706 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:58:10.643524   29706 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:10.643686   29706 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:58:10.643966   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:10.644005   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:10.659026   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45303
	I0425 18:58:10.659454   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:10.659864   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:10.659883   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:10.660190   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:10.660345   29706 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:58:10.660516   29706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:10.660536   29706 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:58:10.663114   29706 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:10.663495   29706 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:58:10.663542   29706 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:10.663671   29706 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:58:10.663847   29706 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:58:10.663996   29706 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:58:10.664122   29706 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:58:10.752797   29706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:10.773710   29706 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:58:10.773751   29706 api_server.go:166] Checking apiserver status ...
	I0425 18:58:10.773819   29706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:58:10.791578   29706 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0425 18:58:10.803972   29706 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:58:10.804034   29706 ssh_runner.go:195] Run: ls
	I0425 18:58:10.809734   29706 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:58:10.815013   29706 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:58:10.815037   29706 status.go:422] ha-912667-m03 apiserver status = Running (err=<nil>)
	I0425 18:58:10.815045   29706 status.go:257] ha-912667-m03 status: &{Name:ha-912667-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:58:10.815059   29706 status.go:255] checking status of ha-912667-m04 ...
	I0425 18:58:10.815387   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:10.815421   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:10.831473   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0425 18:58:10.831983   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:10.832565   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:10.832588   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:10.832880   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:10.833043   29706 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 18:58:10.834589   29706 status.go:330] ha-912667-m04 host status = "Running" (err=<nil>)
	I0425 18:58:10.834608   29706 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:58:10.834863   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:10.834896   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:10.848729   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36307
	I0425 18:58:10.849030   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:10.849428   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:10.849451   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:10.849712   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:10.849892   29706 main.go:141] libmachine: (ha-912667-m04) Calling .GetIP
	I0425 18:58:10.852556   29706 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:10.852991   29706 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:58:10.853024   29706 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:10.853154   29706 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:58:10.853511   29706 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:10.853550   29706 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:10.867310   29706 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0425 18:58:10.867616   29706 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:10.867994   29706 main.go:141] libmachine: Using API Version  1
	I0425 18:58:10.868006   29706 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:10.868251   29706 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:10.868467   29706 main.go:141] libmachine: (ha-912667-m04) Calling .DriverName
	I0425 18:58:10.868652   29706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:10.868671   29706 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHHostname
	I0425 18:58:10.871288   29706 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:10.871682   29706 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:58:10.871705   29706 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:10.871907   29706 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHPort
	I0425 18:58:10.872069   29706 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHKeyPath
	I0425 18:58:10.872220   29706 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHUsername
	I0425 18:58:10.872330   29706 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m04/id_rsa Username:docker}
	I0425 18:58:10.959182   29706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:10.977483   29706 status.go:257] ha-912667-m04 status: &{Name:ha-912667-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
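This second attempt repeats the same probe with the same outcome for ha-912667-m02. The earlier attempt's "retry.go:31] will retry after 216.238498ms" line shows the SSH dial being retried with a short delay before the node is finally marked Error. A rough sketch of that retry-then-give-up pattern, assumed for illustration (the attempt count and delays are invented for the example and are not minikube's actual retry policy):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry retries a TCP dial a few times with a short delay, in the
// spirit of the sshutil/retry lines in the log, before giving up so the
// status check can mark the host as Error.
func dialWithRetry(addr string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn.Close()
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := dialWithRetry("192.168.39.66:22", 3, 250*time.Millisecond); err != nil {
		fmt.Println("giving up:", err)
	}
}
```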
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr: exit status 3 (3.787253414s)

                                                
                                                
-- stdout --
	ha-912667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-912667-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 18:58:14.545771   29823 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:58:14.546020   29823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:58:14.546030   29823 out.go:304] Setting ErrFile to fd 2...
	I0425 18:58:14.546034   29823 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:58:14.546616   29823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:58:14.546917   29823 out.go:298] Setting JSON to false
	I0425 18:58:14.546945   29823 mustload.go:65] Loading cluster: ha-912667
	I0425 18:58:14.547309   29823 notify.go:220] Checking for updates...
	I0425 18:58:14.548005   29823 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:58:14.548037   29823 status.go:255] checking status of ha-912667 ...
	I0425 18:58:14.548574   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:14.548629   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:14.563942   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0425 18:58:14.564340   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:14.564949   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:14.564983   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:14.565396   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:14.565635   29823 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:58:14.567325   29823 status.go:330] ha-912667 host status = "Running" (err=<nil>)
	I0425 18:58:14.567339   29823 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:58:14.567681   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:14.567721   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:14.583292   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38345
	I0425 18:58:14.583697   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:14.584123   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:14.584144   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:14.584442   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:14.584615   29823 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:58:14.587399   29823 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:14.587886   29823 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:58:14.587921   29823 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:14.588038   29823 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:58:14.588322   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:14.588362   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:14.604100   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0425 18:58:14.604650   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:14.605299   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:14.605315   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:14.605584   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:14.605785   29823 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:58:14.606000   29823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:14.606039   29823 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:58:14.608907   29823 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:14.609424   29823 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:58:14.609462   29823 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:14.609637   29823 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:58:14.609812   29823 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:58:14.609951   29823 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:58:14.610083   29823 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:58:14.699520   29823 ssh_runner.go:195] Run: systemctl --version
	I0425 18:58:14.707704   29823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:14.726306   29823 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:58:14.726334   29823 api_server.go:166] Checking apiserver status ...
	I0425 18:58:14.726385   29823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:58:14.746182   29823 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0425 18:58:14.758044   29823 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:58:14.758093   29823 ssh_runner.go:195] Run: ls
	I0425 18:58:14.763681   29823 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:58:14.768301   29823 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:58:14.768325   29823 status.go:422] ha-912667 apiserver status = Running (err=<nil>)
	I0425 18:58:14.768334   29823 status.go:257] ha-912667 status: &{Name:ha-912667 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:58:14.768356   29823 status.go:255] checking status of ha-912667-m02 ...
	I0425 18:58:14.768615   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:14.768654   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:14.783393   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34989
	I0425 18:58:14.783840   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:14.784328   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:14.784347   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:14.784647   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:14.784808   29823 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 18:58:14.786506   29823 status.go:330] ha-912667-m02 host status = "Running" (err=<nil>)
	I0425 18:58:14.786524   29823 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:58:14.786844   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:14.786882   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:14.801875   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38497
	I0425 18:58:14.802271   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:14.802718   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:14.802743   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:14.803031   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:14.803205   29823 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:58:14.806394   29823 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:14.806896   29823 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:58:14.806923   29823 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:14.807077   29823 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 18:58:14.807372   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:14.807408   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:14.821539   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37045
	I0425 18:58:14.821950   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:14.822392   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:14.822414   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:14.822765   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:14.822971   29823 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:58:14.823143   29823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:14.823167   29823 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:58:14.825753   29823 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:14.826231   29823 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:58:14.826260   29823 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:58:14.826402   29823 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:58:14.826565   29823 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:58:14.826713   29823 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:58:14.826827   29823 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	W0425 18:58:17.898479   29823 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.66:22: connect: no route to host
	W0425 18:58:17.898587   29823 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	E0425 18:58:17.898613   29823 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:58:17.898623   29823 status.go:257] ha-912667-m02 status: &{Name:ha-912667-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0425 18:58:17.898644   29823 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.66:22: connect: no route to host
	I0425 18:58:17.898652   29823 status.go:255] checking status of ha-912667-m03 ...
	I0425 18:58:17.899114   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:17.899174   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:17.915319   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39171
	I0425 18:58:17.915791   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:17.916341   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:17.916362   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:17.916674   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:17.916831   29823 main.go:141] libmachine: (ha-912667-m03) Calling .GetState
	I0425 18:58:17.918437   29823 status.go:330] ha-912667-m03 host status = "Running" (err=<nil>)
	I0425 18:58:17.918454   29823 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:58:17.918759   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:17.918792   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:17.934024   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I0425 18:58:17.934418   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:17.934817   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:17.934839   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:17.935101   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:17.935316   29823 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:58:17.938144   29823 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:17.938593   29823 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:58:17.938623   29823 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:17.938776   29823 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:58:17.939063   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:17.939095   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:17.953719   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I0425 18:58:17.954097   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:17.954651   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:17.954677   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:17.955001   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:17.955179   29823 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:58:17.955354   29823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:17.955375   29823 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:58:17.958013   29823 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:17.958485   29823 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:58:17.958524   29823 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:17.958679   29823 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:58:17.958836   29823 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:58:17.958979   29823 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:58:17.959071   29823 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:58:18.052452   29823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:18.073213   29823 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:58:18.073241   29823 api_server.go:166] Checking apiserver status ...
	I0425 18:58:18.073271   29823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:58:18.089156   29823 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0425 18:58:18.101818   29823 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:58:18.101878   29823 ssh_runner.go:195] Run: ls
	I0425 18:58:18.107555   29823 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:58:18.112339   29823 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:58:18.112365   29823 status.go:422] ha-912667-m03 apiserver status = Running (err=<nil>)
	I0425 18:58:18.112374   29823 status.go:257] ha-912667-m03 status: &{Name:ha-912667-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:58:18.112387   29823 status.go:255] checking status of ha-912667-m04 ...
	I0425 18:58:18.112668   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:18.112702   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:18.127429   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0425 18:58:18.127816   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:18.128274   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:18.128306   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:18.128661   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:18.128874   29823 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 18:58:18.130745   29823 status.go:330] ha-912667-m04 host status = "Running" (err=<nil>)
	I0425 18:58:18.130763   29823 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:58:18.131115   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:18.131188   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:18.146196   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
	I0425 18:58:18.146600   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:18.147114   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:18.147135   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:18.147490   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:18.147682   29823 main.go:141] libmachine: (ha-912667-m04) Calling .GetIP
	I0425 18:58:18.150523   29823 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:18.150976   29823 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:58:18.151016   29823 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:18.151087   29823 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:58:18.151365   29823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:18.151404   29823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:18.166556   29823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0425 18:58:18.166976   29823 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:18.167419   29823 main.go:141] libmachine: Using API Version  1
	I0425 18:58:18.167442   29823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:18.167751   29823 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:18.167957   29823 main.go:141] libmachine: (ha-912667-m04) Calling .DriverName
	I0425 18:58:18.168136   29823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:18.168155   29823 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHHostname
	I0425 18:58:18.170706   29823 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:18.171089   29823 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:58:18.171108   29823 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:18.171214   29823 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHPort
	I0425 18:58:18.171376   29823 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHKeyPath
	I0425 18:58:18.171506   29823 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHUsername
	I0425 18:58:18.171652   29823 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m04/id_rsa Username:docker}
	I0425 18:58:18.259662   29823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:18.276862   29823 status.go:257] ha-912667-m04 status: &{Name:ha-912667-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
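On the first two attempts the command exits with status 3 while m02 is reported as Host:Error (SSH unreachable); on the attempt below the kvm2 driver reports the domain itself as stopped, the remaining checks are skipped, and the command exits with status 7 with host, kubelet, apiserver, and kubeconfig all Stopped. A minimal, hypothetical harness for distinguishing the two outcomes by exit code (the binary path and profile name are taken from the log; the meaning attached to each code is based only on what these runs show, not on documented minikube semantics):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test makes; adjust the binary path for your checkout.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-912667", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	if exitErr, ok := err.(*exec.ExitError); ok {
		switch code := exitErr.ExitCode(); code {
		case 3:
			fmt.Println("a node reported Host:Error in these runs (SSH unreachable)")
		case 7:
			fmt.Println("a node reported Host:Stopped in these runs")
		default:
			fmt.Printf("non-zero exit %d\n", code)
		}
	}
}
```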
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr: exit status 7 (654.984012ms)

                                                
                                                
-- stdout --
	ha-912667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-912667-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 18:58:29.295823   29976 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:58:29.296068   29976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:58:29.296077   29976 out.go:304] Setting ErrFile to fd 2...
	I0425 18:58:29.296082   29976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:58:29.296256   29976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:58:29.296423   29976 out.go:298] Setting JSON to false
	I0425 18:58:29.296449   29976 mustload.go:65] Loading cluster: ha-912667
	I0425 18:58:29.296578   29976 notify.go:220] Checking for updates...
	I0425 18:58:29.296819   29976 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:58:29.296835   29976 status.go:255] checking status of ha-912667 ...
	I0425 18:58:29.297212   29976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:29.297285   29976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:29.317703   29976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34071
	I0425 18:58:29.318073   29976 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:29.318572   29976 main.go:141] libmachine: Using API Version  1
	I0425 18:58:29.318607   29976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:29.318992   29976 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:29.319184   29976 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:58:29.321031   29976 status.go:330] ha-912667 host status = "Running" (err=<nil>)
	I0425 18:58:29.321048   29976 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:58:29.321374   29976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:29.321413   29976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:29.336377   29976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0425 18:58:29.336710   29976 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:29.337108   29976 main.go:141] libmachine: Using API Version  1
	I0425 18:58:29.337138   29976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:29.337455   29976 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:29.337654   29976 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:58:29.340633   29976 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:29.341073   29976 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:58:29.341112   29976 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:29.341334   29976 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:58:29.341600   29976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:29.341630   29976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:29.356022   29976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0425 18:58:29.356439   29976 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:29.356878   29976 main.go:141] libmachine: Using API Version  1
	I0425 18:58:29.356899   29976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:29.357207   29976 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:29.357378   29976 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:58:29.357524   29976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:29.357545   29976 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:58:29.360056   29976 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:29.360450   29976 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:58:29.360483   29976 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:58:29.360582   29976 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:58:29.360761   29976 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:58:29.360915   29976 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:58:29.361054   29976 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:58:29.444921   29976 ssh_runner.go:195] Run: systemctl --version
	I0425 18:58:29.452499   29976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:29.469626   29976 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:58:29.469654   29976 api_server.go:166] Checking apiserver status ...
	I0425 18:58:29.469685   29976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:58:29.484919   29976 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0425 18:58:29.495285   29976 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:58:29.495329   29976 ssh_runner.go:195] Run: ls
	I0425 18:58:29.501456   29976 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:58:29.508543   29976 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:58:29.508566   29976 status.go:422] ha-912667 apiserver status = Running (err=<nil>)
	I0425 18:58:29.508575   29976 status.go:257] ha-912667 status: &{Name:ha-912667 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:58:29.508590   29976 status.go:255] checking status of ha-912667-m02 ...
	I0425 18:58:29.508875   29976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:29.508907   29976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:29.523777   29976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43337
	I0425 18:58:29.524143   29976 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:29.524615   29976 main.go:141] libmachine: Using API Version  1
	I0425 18:58:29.524635   29976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:29.524972   29976 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:29.525145   29976 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 18:58:29.526798   29976 status.go:330] ha-912667-m02 host status = "Stopped" (err=<nil>)
	I0425 18:58:29.526820   29976 status.go:343] host is not running, skipping remaining checks
	I0425 18:58:29.526828   29976 status.go:257] ha-912667-m02 status: &{Name:ha-912667-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:58:29.526847   29976 status.go:255] checking status of ha-912667-m03 ...
	I0425 18:58:29.527133   29976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:29.527169   29976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:29.541213   29976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44287
	I0425 18:58:29.541556   29976 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:29.541974   29976 main.go:141] libmachine: Using API Version  1
	I0425 18:58:29.541995   29976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:29.542315   29976 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:29.542529   29976 main.go:141] libmachine: (ha-912667-m03) Calling .GetState
	I0425 18:58:29.544262   29976 status.go:330] ha-912667-m03 host status = "Running" (err=<nil>)
	I0425 18:58:29.544276   29976 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:58:29.544635   29976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:29.544696   29976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:29.558571   29976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35783
	I0425 18:58:29.558984   29976 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:29.559434   29976 main.go:141] libmachine: Using API Version  1
	I0425 18:58:29.559455   29976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:29.559699   29976 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:29.559842   29976 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:58:29.562698   29976 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:29.563145   29976 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:58:29.563172   29976 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:29.563340   29976 host.go:66] Checking if "ha-912667-m03" exists ...
	I0425 18:58:29.563617   29976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:29.563655   29976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:29.577462   29976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35273
	I0425 18:58:29.577849   29976 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:29.578315   29976 main.go:141] libmachine: Using API Version  1
	I0425 18:58:29.578335   29976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:29.578701   29976 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:29.578910   29976 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:58:29.579085   29976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:29.579108   29976 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:58:29.581876   29976 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:29.582313   29976 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:58:29.582343   29976 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:29.582509   29976 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:58:29.582667   29976 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:58:29.582811   29976 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:58:29.582922   29976 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:58:29.672098   29976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:29.690120   29976 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 18:58:29.690154   29976 api_server.go:166] Checking apiserver status ...
	I0425 18:58:29.690200   29976 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:58:29.705233   29976 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup
	W0425 18:58:29.716208   29976 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1547/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 18:58:29.716260   29976 ssh_runner.go:195] Run: ls
	I0425 18:58:29.721406   29976 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 18:58:29.725763   29976 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 18:58:29.725790   29976 status.go:422] ha-912667-m03 apiserver status = Running (err=<nil>)
	I0425 18:58:29.725801   29976 status.go:257] ha-912667-m03 status: &{Name:ha-912667-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 18:58:29.725814   29976 status.go:255] checking status of ha-912667-m04 ...
	I0425 18:58:29.726090   29976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:29.726126   29976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:29.744671   29976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38461
	I0425 18:58:29.745051   29976 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:29.745575   29976 main.go:141] libmachine: Using API Version  1
	I0425 18:58:29.745598   29976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:29.745946   29976 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:29.746146   29976 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 18:58:29.748019   29976 status.go:330] ha-912667-m04 host status = "Running" (err=<nil>)
	I0425 18:58:29.748033   29976 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:58:29.748349   29976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:29.748399   29976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:29.763109   29976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37791
	I0425 18:58:29.763504   29976 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:29.763952   29976 main.go:141] libmachine: Using API Version  1
	I0425 18:58:29.763977   29976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:29.764258   29976 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:29.764437   29976 main.go:141] libmachine: (ha-912667-m04) Calling .GetIP
	I0425 18:58:29.767284   29976 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:29.767781   29976 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:58:29.767815   29976 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:29.767952   29976 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 18:58:29.768406   29976 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:29.768445   29976 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:29.782450   29976 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I0425 18:58:29.782825   29976 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:29.783253   29976 main.go:141] libmachine: Using API Version  1
	I0425 18:58:29.783272   29976 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:29.783576   29976 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:29.783763   29976 main.go:141] libmachine: (ha-912667-m04) Calling .DriverName
	I0425 18:58:29.783956   29976 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 18:58:29.783979   29976 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHHostname
	I0425 18:58:29.786862   29976 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:29.787401   29976 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:58:29.787430   29976 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:29.787669   29976 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHPort
	I0425 18:58:29.787841   29976 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHKeyPath
	I0425 18:58:29.787991   29976 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHUsername
	I0425 18:58:29.788132   29976 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m04/id_rsa Username:docker}
	I0425 18:58:29.874544   29976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:58:29.890134   29976 status.go:257] ha-912667-m04 status: &{Name:ha-912667-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr" : exit status 7
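The status check that fails above follows the same sequence for every control-plane node: look up the cluster server from the kubeconfig, pgrep the kube-apiserver process over SSH, attempt to read its freezer cgroup (which can exit non-zero when no freezer controller is listed, hence the warnings), and finally probe https://192.168.39.254:8443/healthz. The sketch below is a minimal, hypothetical reproduction of that final healthz probe in Go; the endpoint value and the choice to skip TLS verification are illustrative assumptions only, not minikube's actual status.go, which trusts the cluster CA from the kubeconfig.

// Minimal sketch (not minikube's implementation): treat an HTTP 200 with
// body "ok" from /healthz as a healthy apiserver, matching the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch only; real code should verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443")
	fmt.Println(ok, err)
}

Because the ha-912667-m02 host is reported Stopped, minikube never reaches this probe for that node, and the overall `status` command exits with status 7 as shown above.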
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-912667 -n ha-912667
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-912667 logs -n 25: (1.639433296s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667:/home/docker/cp-test_ha-912667-m03_ha-912667.txt                     |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667 sudo cat                                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m03_ha-912667.txt                               |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m02:/home/docker/cp-test_ha-912667-m03_ha-912667-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m02 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m03_ha-912667-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04:/home/docker/cp-test_ha-912667-m03_ha-912667-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m04 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m03_ha-912667-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp testdata/cp-test.txt                                              | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile60710412/001/cp-test_ha-912667-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667:/home/docker/cp-test_ha-912667-m04_ha-912667.txt                     |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667 sudo cat                                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667.txt                               |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m02:/home/docker/cp-test_ha-912667-m04_ha-912667-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m02 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03:/home/docker/cp-test_ha-912667-m04_ha-912667-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m03 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-912667 node stop m02 -v=7                                                   | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-912667 node start m02 -v=7                                                  | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:57 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 18:49:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 18:49:35.469800   24262 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:49:35.471114   24262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:49:35.471131   24262 out.go:304] Setting ErrFile to fd 2...
	I0425 18:49:35.471138   24262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:49:35.471361   24262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:49:35.471966   24262 out.go:298] Setting JSON to false
	I0425 18:49:35.472851   24262 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1921,"bootTime":1714069054,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 18:49:35.472907   24262 start.go:139] virtualization: kvm guest
	I0425 18:49:35.474690   24262 out.go:177] * [ha-912667] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 18:49:35.476293   24262 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 18:49:35.477409   24262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 18:49:35.476293   24262 notify.go:220] Checking for updates...
	I0425 18:49:35.479776   24262 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:49:35.481005   24262 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:49:35.482165   24262 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 18:49:35.483400   24262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 18:49:35.484732   24262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 18:49:35.518402   24262 out.go:177] * Using the kvm2 driver based on user configuration
	I0425 18:49:35.519738   24262 start.go:297] selected driver: kvm2
	I0425 18:49:35.519755   24262 start.go:901] validating driver "kvm2" against <nil>
	I0425 18:49:35.519768   24262 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 18:49:35.520503   24262 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:49:35.520593   24262 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 18:49:35.535933   24262 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 18:49:35.536000   24262 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 18:49:35.536268   24262 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 18:49:35.536333   24262 cni.go:84] Creating CNI manager for ""
	I0425 18:49:35.536349   24262 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0425 18:49:35.536356   24262 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0425 18:49:35.536451   24262 start.go:340] cluster config:
	{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:49:35.536583   24262 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:49:35.538666   24262 out.go:177] * Starting "ha-912667" primary control-plane node in "ha-912667" cluster
	I0425 18:49:35.539979   24262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:49:35.540029   24262 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 18:49:35.540041   24262 cache.go:56] Caching tarball of preloaded images
	I0425 18:49:35.540151   24262 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 18:49:35.540163   24262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 18:49:35.540499   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:49:35.540524   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json: {Name:mkaea86dc7c947902746e075d4b5d6d393bd8935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:49:35.540659   24262 start.go:360] acquireMachinesLock for ha-912667: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 18:49:35.540696   24262 start.go:364] duration metric: took 18.658µs to acquireMachinesLock for "ha-912667"
	I0425 18:49:35.540713   24262 start.go:93] Provisioning new machine with config: &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:49:35.540771   24262 start.go:125] createHost starting for "" (driver="kvm2")
	I0425 18:49:35.542390   24262 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0425 18:49:35.542512   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:49:35.542554   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:49:35.557109   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0425 18:49:35.557528   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:49:35.558113   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:49:35.558132   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:49:35.558453   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:49:35.558626   24262 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 18:49:35.558764   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:49:35.558892   24262 start.go:159] libmachine.API.Create for "ha-912667" (driver="kvm2")
	I0425 18:49:35.558954   24262 client.go:168] LocalClient.Create starting
	I0425 18:49:35.558992   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 18:49:35.559036   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:49:35.559057   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:49:35.559118   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 18:49:35.559142   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:49:35.559160   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:49:35.559183   24262 main.go:141] libmachine: Running pre-create checks...
	I0425 18:49:35.559195   24262 main.go:141] libmachine: (ha-912667) Calling .PreCreateCheck
	I0425 18:49:35.559546   24262 main.go:141] libmachine: (ha-912667) Calling .GetConfigRaw
	I0425 18:49:35.559939   24262 main.go:141] libmachine: Creating machine...
	I0425 18:49:35.559951   24262 main.go:141] libmachine: (ha-912667) Calling .Create
	I0425 18:49:35.560081   24262 main.go:141] libmachine: (ha-912667) Creating KVM machine...
	I0425 18:49:35.561210   24262 main.go:141] libmachine: (ha-912667) DBG | found existing default KVM network
	I0425 18:49:35.561889   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:35.561704   24285 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001125e0}
	I0425 18:49:35.561932   24262 main.go:141] libmachine: (ha-912667) DBG | created network xml: 
	I0425 18:49:35.561949   24262 main.go:141] libmachine: (ha-912667) DBG | <network>
	I0425 18:49:35.561960   24262 main.go:141] libmachine: (ha-912667) DBG |   <name>mk-ha-912667</name>
	I0425 18:49:35.561973   24262 main.go:141] libmachine: (ha-912667) DBG |   <dns enable='no'/>
	I0425 18:49:35.561982   24262 main.go:141] libmachine: (ha-912667) DBG |   
	I0425 18:49:35.561995   24262 main.go:141] libmachine: (ha-912667) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0425 18:49:35.562005   24262 main.go:141] libmachine: (ha-912667) DBG |     <dhcp>
	I0425 18:49:35.562031   24262 main.go:141] libmachine: (ha-912667) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0425 18:49:35.562049   24262 main.go:141] libmachine: (ha-912667) DBG |     </dhcp>
	I0425 18:49:35.562080   24262 main.go:141] libmachine: (ha-912667) DBG |   </ip>
	I0425 18:49:35.562102   24262 main.go:141] libmachine: (ha-912667) DBG |   
	I0425 18:49:35.562115   24262 main.go:141] libmachine: (ha-912667) DBG | </network>
	I0425 18:49:35.562125   24262 main.go:141] libmachine: (ha-912667) DBG | 
	I0425 18:49:35.567221   24262 main.go:141] libmachine: (ha-912667) DBG | trying to create private KVM network mk-ha-912667 192.168.39.0/24...
	I0425 18:49:35.630513   24262 main.go:141] libmachine: (ha-912667) DBG | private KVM network mk-ha-912667 192.168.39.0/24 created
	I0425 18:49:35.630541   24262 main.go:141] libmachine: (ha-912667) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667 ...
	I0425 18:49:35.630558   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:35.630503   24285 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:49:35.630574   24262 main.go:141] libmachine: (ha-912667) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 18:49:35.630637   24262 main.go:141] libmachine: (ha-912667) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 18:49:35.856167   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:35.856020   24285 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa...
	I0425 18:49:35.993843   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:35.993741   24285 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/ha-912667.rawdisk...
	I0425 18:49:35.993892   24262 main.go:141] libmachine: (ha-912667) DBG | Writing magic tar header
	I0425 18:49:35.993902   24262 main.go:141] libmachine: (ha-912667) DBG | Writing SSH key tar header
	I0425 18:49:35.993911   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:35.993856   24285 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667 ...
	I0425 18:49:35.993985   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667
	I0425 18:49:35.994012   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667 (perms=drwx------)
	I0425 18:49:35.994025   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 18:49:35.994041   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:49:35.994051   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 18:49:35.994060   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 18:49:35.994069   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home/jenkins
	I0425 18:49:35.994079   24262 main.go:141] libmachine: (ha-912667) DBG | Checking permissions on dir: /home
	I0425 18:49:35.994097   24262 main.go:141] libmachine: (ha-912667) DBG | Skipping /home - not owner
	I0425 18:49:35.994111   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 18:49:35.994129   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 18:49:35.994141   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 18:49:35.994153   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 18:49:35.994165   24262 main.go:141] libmachine: (ha-912667) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 18:49:35.994182   24262 main.go:141] libmachine: (ha-912667) Creating domain...
	I0425 18:49:35.995208   24262 main.go:141] libmachine: (ha-912667) define libvirt domain using xml: 
	I0425 18:49:35.995226   24262 main.go:141] libmachine: (ha-912667) <domain type='kvm'>
	I0425 18:49:35.995235   24262 main.go:141] libmachine: (ha-912667)   <name>ha-912667</name>
	I0425 18:49:35.995242   24262 main.go:141] libmachine: (ha-912667)   <memory unit='MiB'>2200</memory>
	I0425 18:49:35.995250   24262 main.go:141] libmachine: (ha-912667)   <vcpu>2</vcpu>
	I0425 18:49:35.995256   24262 main.go:141] libmachine: (ha-912667)   <features>
	I0425 18:49:35.995264   24262 main.go:141] libmachine: (ha-912667)     <acpi/>
	I0425 18:49:35.995270   24262 main.go:141] libmachine: (ha-912667)     <apic/>
	I0425 18:49:35.995275   24262 main.go:141] libmachine: (ha-912667)     <pae/>
	I0425 18:49:35.995280   24262 main.go:141] libmachine: (ha-912667)     
	I0425 18:49:35.995288   24262 main.go:141] libmachine: (ha-912667)   </features>
	I0425 18:49:35.995293   24262 main.go:141] libmachine: (ha-912667)   <cpu mode='host-passthrough'>
	I0425 18:49:35.995308   24262 main.go:141] libmachine: (ha-912667)   
	I0425 18:49:35.995325   24262 main.go:141] libmachine: (ha-912667)   </cpu>
	I0425 18:49:35.995334   24262 main.go:141] libmachine: (ha-912667)   <os>
	I0425 18:49:35.995344   24262 main.go:141] libmachine: (ha-912667)     <type>hvm</type>
	I0425 18:49:35.995362   24262 main.go:141] libmachine: (ha-912667)     <boot dev='cdrom'/>
	I0425 18:49:35.995369   24262 main.go:141] libmachine: (ha-912667)     <boot dev='hd'/>
	I0425 18:49:35.995379   24262 main.go:141] libmachine: (ha-912667)     <bootmenu enable='no'/>
	I0425 18:49:35.995396   24262 main.go:141] libmachine: (ha-912667)   </os>
	I0425 18:49:35.995416   24262 main.go:141] libmachine: (ha-912667)   <devices>
	I0425 18:49:35.995433   24262 main.go:141] libmachine: (ha-912667)     <disk type='file' device='cdrom'>
	I0425 18:49:35.995441   24262 main.go:141] libmachine: (ha-912667)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/boot2docker.iso'/>
	I0425 18:49:35.995449   24262 main.go:141] libmachine: (ha-912667)       <target dev='hdc' bus='scsi'/>
	I0425 18:49:35.995457   24262 main.go:141] libmachine: (ha-912667)       <readonly/>
	I0425 18:49:35.995463   24262 main.go:141] libmachine: (ha-912667)     </disk>
	I0425 18:49:35.995468   24262 main.go:141] libmachine: (ha-912667)     <disk type='file' device='disk'>
	I0425 18:49:35.995476   24262 main.go:141] libmachine: (ha-912667)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 18:49:35.995486   24262 main.go:141] libmachine: (ha-912667)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/ha-912667.rawdisk'/>
	I0425 18:49:35.995493   24262 main.go:141] libmachine: (ha-912667)       <target dev='hda' bus='virtio'/>
	I0425 18:49:35.995498   24262 main.go:141] libmachine: (ha-912667)     </disk>
	I0425 18:49:35.995505   24262 main.go:141] libmachine: (ha-912667)     <interface type='network'>
	I0425 18:49:35.995533   24262 main.go:141] libmachine: (ha-912667)       <source network='mk-ha-912667'/>
	I0425 18:49:35.995560   24262 main.go:141] libmachine: (ha-912667)       <model type='virtio'/>
	I0425 18:49:35.995575   24262 main.go:141] libmachine: (ha-912667)     </interface>
	I0425 18:49:35.995588   24262 main.go:141] libmachine: (ha-912667)     <interface type='network'>
	I0425 18:49:35.995602   24262 main.go:141] libmachine: (ha-912667)       <source network='default'/>
	I0425 18:49:35.995614   24262 main.go:141] libmachine: (ha-912667)       <model type='virtio'/>
	I0425 18:49:35.995628   24262 main.go:141] libmachine: (ha-912667)     </interface>
	I0425 18:49:35.995645   24262 main.go:141] libmachine: (ha-912667)     <serial type='pty'>
	I0425 18:49:35.995668   24262 main.go:141] libmachine: (ha-912667)       <target port='0'/>
	I0425 18:49:35.995680   24262 main.go:141] libmachine: (ha-912667)     </serial>
	I0425 18:49:35.995694   24262 main.go:141] libmachine: (ha-912667)     <console type='pty'>
	I0425 18:49:35.995706   24262 main.go:141] libmachine: (ha-912667)       <target type='serial' port='0'/>
	I0425 18:49:35.995723   24262 main.go:141] libmachine: (ha-912667)     </console>
	I0425 18:49:35.995741   24262 main.go:141] libmachine: (ha-912667)     <rng model='virtio'>
	I0425 18:49:35.995755   24262 main.go:141] libmachine: (ha-912667)       <backend model='random'>/dev/random</backend>
	I0425 18:49:35.995765   24262 main.go:141] libmachine: (ha-912667)     </rng>
	I0425 18:49:35.995777   24262 main.go:141] libmachine: (ha-912667)     
	I0425 18:49:35.995788   24262 main.go:141] libmachine: (ha-912667)     
	I0425 18:49:35.995801   24262 main.go:141] libmachine: (ha-912667)   </devices>
	I0425 18:49:35.995828   24262 main.go:141] libmachine: (ha-912667) </domain>
	I0425 18:49:35.995844   24262 main.go:141] libmachine: (ha-912667) 
	I0425 18:49:36.001722   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:d3:aa:e8 in network default
	I0425 18:49:36.002318   24262 main.go:141] libmachine: (ha-912667) Ensuring networks are active...
	I0425 18:49:36.002336   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:36.002959   24262 main.go:141] libmachine: (ha-912667) Ensuring network default is active
	I0425 18:49:36.003284   24262 main.go:141] libmachine: (ha-912667) Ensuring network mk-ha-912667 is active
	I0425 18:49:36.003742   24262 main.go:141] libmachine: (ha-912667) Getting domain xml...
	I0425 18:49:36.004540   24262 main.go:141] libmachine: (ha-912667) Creating domain...
	I0425 18:49:37.173393   24262 main.go:141] libmachine: (ha-912667) Waiting to get IP...
	I0425 18:49:37.174284   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:37.174672   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:37.174707   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:37.174646   24285 retry.go:31] will retry after 292.650601ms: waiting for machine to come up
	I0425 18:49:37.469205   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:37.469643   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:37.469668   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:37.469606   24285 retry.go:31] will retry after 373.276627ms: waiting for machine to come up
	I0425 18:49:37.844039   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:37.844434   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:37.844463   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:37.844403   24285 retry.go:31] will retry after 343.112246ms: waiting for machine to come up
	I0425 18:49:38.188940   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:38.189427   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:38.189458   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:38.189371   24285 retry.go:31] will retry after 489.386145ms: waiting for machine to come up
	I0425 18:49:38.679903   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:38.680379   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:38.680404   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:38.680331   24285 retry.go:31] will retry after 598.945496ms: waiting for machine to come up
	I0425 18:49:39.281509   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:39.282156   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:39.282185   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:39.282091   24285 retry.go:31] will retry after 639.572202ms: waiting for machine to come up
	I0425 18:49:39.922960   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:39.923304   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:39.923348   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:39.923279   24285 retry.go:31] will retry after 876.557847ms: waiting for machine to come up
	I0425 18:49:40.801689   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:40.802099   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:40.802125   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:40.802048   24285 retry.go:31] will retry after 1.040148124s: waiting for machine to come up
	I0425 18:49:41.844086   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:41.844488   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:41.844511   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:41.844457   24285 retry.go:31] will retry after 1.811704814s: waiting for machine to come up
	I0425 18:49:43.658521   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:43.658930   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:43.658974   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:43.658892   24285 retry.go:31] will retry after 2.216558346s: waiting for machine to come up
	I0425 18:49:45.877597   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:45.878014   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:45.878037   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:45.877971   24285 retry.go:31] will retry after 2.176487509s: waiting for machine to come up
	I0425 18:49:48.057321   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:48.057761   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:48.057782   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:48.057727   24285 retry.go:31] will retry after 3.000506427s: waiting for machine to come up
	I0425 18:49:51.059530   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:51.059895   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:51.059925   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:51.059865   24285 retry.go:31] will retry after 4.068045939s: waiting for machine to come up
	I0425 18:49:55.133027   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:55.133367   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find current IP address of domain ha-912667 in network mk-ha-912667
	I0425 18:49:55.133405   24262 main.go:141] libmachine: (ha-912667) DBG | I0425 18:49:55.133336   24285 retry.go:31] will retry after 4.1493096s: waiting for machine to come up
	I0425 18:49:59.286531   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.286979   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has current primary IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.286997   24262 main.go:141] libmachine: (ha-912667) Found IP for machine: 192.168.39.189
	I0425 18:49:59.287009   24262 main.go:141] libmachine: (ha-912667) Reserving static IP address...
	I0425 18:49:59.287351   24262 main.go:141] libmachine: (ha-912667) DBG | unable to find host DHCP lease matching {name: "ha-912667", mac: "52:54:00:f2:04:73", ip: "192.168.39.189"} in network mk-ha-912667
	I0425 18:49:59.357601   24262 main.go:141] libmachine: (ha-912667) DBG | Getting to WaitForSSH function...
	I0425 18:49:59.357637   24262 main.go:141] libmachine: (ha-912667) Reserved static IP address: 192.168.39.189
	I0425 18:49:59.357652   24262 main.go:141] libmachine: (ha-912667) Waiting for SSH to be available...
	I0425 18:49:59.359971   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.360382   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.360419   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.360569   24262 main.go:141] libmachine: (ha-912667) DBG | Using SSH client type: external
	I0425 18:49:59.360693   24262 main.go:141] libmachine: (ha-912667) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa (-rw-------)
	I0425 18:49:59.360740   24262 main.go:141] libmachine: (ha-912667) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 18:49:59.360760   24262 main.go:141] libmachine: (ha-912667) DBG | About to run SSH command:
	I0425 18:49:59.360782   24262 main.go:141] libmachine: (ha-912667) DBG | exit 0
	I0425 18:49:59.486690   24262 main.go:141] libmachine: (ha-912667) DBG | SSH cmd err, output: <nil>: 
	I0425 18:49:59.487035   24262 main.go:141] libmachine: (ha-912667) KVM machine creation complete!
	I0425 18:49:59.487328   24262 main.go:141] libmachine: (ha-912667) Calling .GetConfigRaw
	I0425 18:49:59.487862   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:49:59.488044   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:49:59.488201   24262 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 18:49:59.488215   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:49:59.489328   24262 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 18:49:59.489345   24262 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 18:49:59.489353   24262 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 18:49:59.489361   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:49:59.491781   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.492187   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.492209   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.492390   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:49:59.492569   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.492707   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.492898   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:49:59.493059   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:49:59.493269   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:49:59.493282   24262 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 18:49:59.597859   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 18:49:59.597881   24262 main.go:141] libmachine: Detecting the provisioner...
	I0425 18:49:59.597888   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:49:59.600514   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.601001   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.601024   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.601244   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:49:59.601430   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.601622   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.601749   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:49:59.601909   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:49:59.602101   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:49:59.602114   24262 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 18:49:59.707580   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 18:49:59.707693   24262 main.go:141] libmachine: found compatible host: buildroot
	I0425 18:49:59.707710   24262 main.go:141] libmachine: Provisioning with buildroot...
	I0425 18:49:59.707721   24262 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 18:49:59.707946   24262 buildroot.go:166] provisioning hostname "ha-912667"
	I0425 18:49:59.707968   24262 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 18:49:59.708146   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:49:59.710647   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.710956   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.710980   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.711109   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:49:59.711269   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.711438   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.711546   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:49:59.711691   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:49:59.711910   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:49:59.711925   24262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-912667 && echo "ha-912667" | sudo tee /etc/hostname
	I0425 18:49:59.828703   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-912667
	
	I0425 18:49:59.828734   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:49:59.831060   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.831343   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.831366   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.831508   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:49:59.831698   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.831855   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:49:59.831988   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:49:59.832154   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:49:59.832352   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:49:59.832371   24262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-912667' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-912667/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-912667' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 18:49:59.948805   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 18:49:59.948828   24262 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 18:49:59.948855   24262 buildroot.go:174] setting up certificates
	I0425 18:49:59.948868   24262 provision.go:84] configureAuth start
	I0425 18:49:59.948886   24262 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 18:49:59.949136   24262 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:49:59.951730   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.952034   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.952058   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.952239   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:49:59.954284   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.954602   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:49:59.954626   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:49:59.954721   24262 provision.go:143] copyHostCerts
	I0425 18:49:59.954748   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:49:59.954784   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 18:49:59.954793   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:49:59.954864   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 18:49:59.954948   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:49:59.954965   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 18:49:59.954971   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:49:59.954995   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 18:49:59.955045   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:49:59.955060   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 18:49:59.955067   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:49:59.955086   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 18:49:59.955147   24262 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.ha-912667 san=[127.0.0.1 192.168.39.189 ha-912667 localhost minikube]
	I0425 18:50:00.008083   24262 provision.go:177] copyRemoteCerts
	I0425 18:50:00.008153   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 18:50:00.008173   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.010697   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.011011   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.011037   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.011221   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.011406   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.011519   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.011653   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:00.093508   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 18:50:00.093584   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 18:50:00.122848   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 18:50:00.122936   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0425 18:50:00.148658   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 18:50:00.148732   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 18:50:00.176370   24262 provision.go:87] duration metric: took 227.48225ms to configureAuth
	I0425 18:50:00.176402   24262 buildroot.go:189] setting minikube options for container-runtime
	I0425 18:50:00.176571   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:50:00.176636   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.179236   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.179633   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.179663   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.179801   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.180003   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.180202   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.180346   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.180551   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:50:00.180731   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:50:00.180749   24262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 18:50:00.460168   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 18:50:00.460201   24262 main.go:141] libmachine: Checking connection to Docker...
	I0425 18:50:00.460211   24262 main.go:141] libmachine: (ha-912667) Calling .GetURL
	I0425 18:50:00.461407   24262 main.go:141] libmachine: (ha-912667) DBG | Using libvirt version 6000000
	I0425 18:50:00.463582   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.463894   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.463923   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.464068   24262 main.go:141] libmachine: Docker is up and running!
	I0425 18:50:00.464080   24262 main.go:141] libmachine: Reticulating splines...
	I0425 18:50:00.464086   24262 client.go:171] duration metric: took 24.905122677s to LocalClient.Create
	I0425 18:50:00.464104   24262 start.go:167] duration metric: took 24.905214044s to libmachine.API.Create "ha-912667"
	I0425 18:50:00.464114   24262 start.go:293] postStartSetup for "ha-912667" (driver="kvm2")
	I0425 18:50:00.464122   24262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 18:50:00.464136   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:00.464353   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 18:50:00.464378   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.466261   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.466584   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.466608   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.466746   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.466934   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.467082   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.467205   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:00.550088   24262 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 18:50:00.554948   24262 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 18:50:00.554981   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 18:50:00.555075   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 18:50:00.555159   24262 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 18:50:00.555170   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 18:50:00.555268   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 18:50:00.566291   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:50:00.592708   24262 start.go:296] duration metric: took 128.58284ms for postStartSetup
	I0425 18:50:00.592746   24262 main.go:141] libmachine: (ha-912667) Calling .GetConfigRaw
	I0425 18:50:00.593257   24262 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:50:00.595651   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.595948   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.595971   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.596220   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:50:00.596379   24262 start.go:128] duration metric: took 25.055600373s to createHost
	I0425 18:50:00.596401   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.598325   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.598586   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.598619   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.598758   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.598933   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.599086   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.599189   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.599306   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:50:00.599501   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 18:50:00.599527   24262 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 18:50:00.707663   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714071000.689845653
	
	I0425 18:50:00.707685   24262 fix.go:216] guest clock: 1714071000.689845653
	I0425 18:50:00.707693   24262 fix.go:229] Guest: 2024-04-25 18:50:00.689845653 +0000 UTC Remote: 2024-04-25 18:50:00.596390759 +0000 UTC m=+25.171804641 (delta=93.454894ms)
	I0425 18:50:00.707725   24262 fix.go:200] guest clock delta is within tolerance: 93.454894ms
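	The "guest clock" lines above compare the VM's wall clock against the host's and log the absolute delta. A minimal, hypothetical Go sketch of that comparison (not minikube's code; the 2s tolerance is an assumption, the actual threshold is not shown in this log):

	// clockskew.go - hypothetical illustration of the guest-clock check logged above.
	// The 2*time.Second tolerance is an assumed value, not taken from minikube.
	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance returns the absolute guest-vs-host delta and whether it fits the tolerance.
	func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tol
	}

	func main() {
		host := time.Now()
		guest := host.Add(93454894 * time.Nanosecond) // the ~93.45ms delta seen in the log
		d, ok := withinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
	}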
	I0425 18:50:00.707730   24262 start.go:83] releasing machines lock for "ha-912667", held for 25.167025439s
	I0425 18:50:00.707751   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:00.708001   24262 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:50:00.710414   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.710715   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.710760   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.710914   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:00.711428   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:00.711611   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:00.711706   24262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 18:50:00.711746   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.711782   24262 ssh_runner.go:195] Run: cat /version.json
	I0425 18:50:00.711808   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:00.714262   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.714601   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.714633   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.714661   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.714762   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.714916   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.714960   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:00.714982   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:00.715074   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.715179   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:00.715225   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:00.715303   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:00.715396   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:00.715518   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:00.792222   24262 ssh_runner.go:195] Run: systemctl --version
	I0425 18:50:00.817107   24262 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 18:50:00.978976   24262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 18:50:00.985481   24262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 18:50:00.985547   24262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 18:50:01.002497   24262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 18:50:01.002518   24262 start.go:494] detecting cgroup driver to use...
	I0425 18:50:01.002565   24262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 18:50:01.018272   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 18:50:01.032711   24262 docker.go:217] disabling cri-docker service (if available) ...
	I0425 18:50:01.032776   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 18:50:01.046860   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 18:50:01.060895   24262 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 18:50:01.180129   24262 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 18:50:01.352614   24262 docker.go:233] disabling docker service ...
	I0425 18:50:01.352697   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 18:50:01.369345   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 18:50:01.384253   24262 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 18:50:01.514717   24262 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 18:50:01.637248   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 18:50:01.652388   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 18:50:01.673257   24262 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 18:50:01.673329   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.685625   24262 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 18:50:01.685714   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.698390   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.710705   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.722948   24262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 18:50:01.735752   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.748133   24262 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.767545   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:50:01.780135   24262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 18:50:01.791443   24262 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 18:50:01.791500   24262 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 18:50:01.807418   24262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 18:50:01.819224   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:50:01.954389   24262 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 18:50:02.109149   24262 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 18:50:02.109219   24262 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 18:50:02.114436   24262 start.go:562] Will wait 60s for crictl version
	I0425 18:50:02.114482   24262 ssh_runner.go:195] Run: which crictl
	I0425 18:50:02.118484   24262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 18:50:02.160407   24262 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 18:50:02.160522   24262 ssh_runner.go:195] Run: crio --version
	I0425 18:50:02.192176   24262 ssh_runner.go:195] Run: crio --version
	I0425 18:50:02.225009   24262 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 18:50:02.226615   24262 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 18:50:02.228982   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:02.229338   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:02.229368   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:02.229652   24262 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 18:50:02.234282   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:50:02.249719   24262 kubeadm.go:877] updating cluster {Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 18:50:02.249826   24262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:50:02.249867   24262 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 18:50:02.286423   24262 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 18:50:02.286483   24262 ssh_runner.go:195] Run: which lz4
	I0425 18:50:02.290889   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0425 18:50:02.290983   24262 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 18:50:02.295888   24262 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 18:50:02.295912   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 18:50:03.988836   24262 crio.go:462] duration metric: took 1.697878668s to copy over tarball
	I0425 18:50:03.988895   24262 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 18:50:06.456388   24262 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.4674596s)
	I0425 18:50:06.456425   24262 crio.go:469] duration metric: took 2.467561699s to extract the tarball
	I0425 18:50:06.456434   24262 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 18:50:06.495294   24262 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 18:50:06.547133   24262 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 18:50:06.547154   24262 cache_images.go:84] Images are preloaded, skipping loading
	I0425 18:50:06.547164   24262 kubeadm.go:928] updating node { 192.168.39.189 8443 v1.30.0 crio true true} ...
	I0425 18:50:06.547268   24262 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-912667 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 18:50:06.547359   24262 ssh_runner.go:195] Run: crio config
	I0425 18:50:06.593864   24262 cni.go:84] Creating CNI manager for ""
	I0425 18:50:06.593888   24262 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0425 18:50:06.593900   24262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 18:50:06.593930   24262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-912667 NodeName:ha-912667 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 18:50:06.594091   24262 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-912667"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.189
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
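	The multi-document kubeadm config above is what this HA profile is started with. A minimal, hypothetical Go sketch (not part of minikube or this test run; assumes the gopkg.in/yaml.v3 module and a file name of the reader's choosing) for pulling out the fields most often eyeballed when an HA start fails:

	// kubeadmcheck.go - hypothetical helper, not part of minikube or this test run.
	// Reads a multi-document kubeadm config (like the one logged above) from stdin
	// and prints a few fields worth checking.
	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		dec := yaml.NewDecoder(os.Stdin)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF or a malformed document ends the loop
			}
			switch doc["kind"] {
			case "ClusterConfiguration":
				fmt.Println("controlPlaneEndpoint:", doc["controlPlaneEndpoint"])
			case "KubeletConfiguration":
				fmt.Println("cgroupDriver:", doc["cgroupDriver"])
			case "KubeProxyConfiguration":
				fmt.Println("clusterCIDR:", doc["clusterCIDR"])
			}
		}
	}

	Run as, for example, "go run kubeadmcheck.go < kubeadm.yaml" against a copy of the config above.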
	I0425 18:50:06.594120   24262 kube-vip.go:111] generating kube-vip config ...
	I0425 18:50:06.594167   24262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0425 18:50:06.616921   24262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0425 18:50:06.617049   24262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
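	The static pod above pins the HA virtual IP 192.168.39.254 on eth0 and enables control-plane load balancing on port 8443. A hypothetical Go sketch (not minikube code; assumes gopkg.in/yaml.v3) that reads such a manifest and prints those settings:

	// kubevipcheck.go - hypothetical helper: reads a kube-vip static-pod manifest
	// like the one logged above from stdin and prints the HA VIP settings.
	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	type pod struct {
		Spec struct {
			Containers []struct {
				Image string `yaml:"image"`
				Env   []struct {
					Name  string `yaml:"name"`
					Value string `yaml:"value"`
				} `yaml:"env"`
			} `yaml:"containers"`
		} `yaml:"spec"`
	}

	func main() {
		var p pod
		if err := yaml.NewDecoder(os.Stdin).Decode(&p); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, c := range p.Spec.Containers {
			fmt.Println("image:", c.Image)
			for _, e := range c.Env {
				switch e.Name {
				case "address", "port", "vip_interface", "cp_enable", "lb_enable", "lb_port":
					fmt.Printf("%s=%s\n", e.Name, e.Value)
				}
			}
		}
	}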
	I0425 18:50:06.617132   24262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 18:50:06.633591   24262 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 18:50:06.633648   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0425 18:50:06.644675   24262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0425 18:50:06.663438   24262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 18:50:06.681860   24262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0425 18:50:06.700503   24262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0425 18:50:06.719035   24262 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0425 18:50:06.723411   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:50:06.736636   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:50:06.881784   24262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:50:06.900951   24262 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667 for IP: 192.168.39.189
	I0425 18:50:06.900979   24262 certs.go:194] generating shared ca certs ...
	I0425 18:50:06.900999   24262 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:06.901213   24262 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 18:50:06.901275   24262 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 18:50:06.901296   24262 certs.go:256] generating profile certs ...
	I0425 18:50:06.901364   24262 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key
	I0425 18:50:06.901385   24262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt with IP's: []
	I0425 18:50:07.197964   24262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt ...
	I0425 18:50:07.197995   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt: {Name:mkc3ff1f172713a4c9e99916dbf5dd6d8bd441d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.198153   24262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key ...
	I0425 18:50:07.198164   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key: {Name:mkc518be03db694a05e374dc619217f41b49d35f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.198253   24262 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.11613977
	I0425 18:50:07.198267   24262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.11613977 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189 192.168.39.254]
	I0425 18:50:07.355394   24262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.11613977 ...
	I0425 18:50:07.355429   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.11613977: {Name:mk81b9c860a5f69befde658e1feebb2f32b35f6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.355573   24262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.11613977 ...
	I0425 18:50:07.355585   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.11613977: {Name:mke84934957246a63a3f2ef2d488b41d02efc4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.355650   24262 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.11613977 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt
	I0425 18:50:07.355721   24262 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.11613977 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key
	I0425 18:50:07.355771   24262 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key
	I0425 18:50:07.355785   24262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt with IP's: []
	I0425 18:50:07.433932   24262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt ...
	I0425 18:50:07.433962   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt: {Name:mk3a035fbc85b97c96ad782548ea30273a035173 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.434109   24262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key ...
	I0425 18:50:07.434119   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key: {Name:mk5185d04df7e21e25a0334444109356dcf25f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:07.434179   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 18:50:07.434201   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 18:50:07.434230   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 18:50:07.434240   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 18:50:07.434249   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 18:50:07.434265   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 18:50:07.434275   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 18:50:07.434284   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 18:50:07.434336   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 18:50:07.434374   24262 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 18:50:07.434382   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 18:50:07.434401   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 18:50:07.434422   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 18:50:07.434442   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 18:50:07.434478   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:50:07.434510   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:50:07.434523   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 18:50:07.434534   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 18:50:07.435103   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 18:50:07.471231   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 18:50:07.501869   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 18:50:07.532288   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 18:50:07.562851   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 18:50:07.592410   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 18:50:07.622943   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 18:50:07.657028   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0425 18:50:07.685926   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 18:50:07.721853   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 18:50:07.753558   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 18:50:07.781706   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 18:50:07.801530   24262 ssh_runner.go:195] Run: openssl version
	I0425 18:50:07.808002   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 18:50:07.820553   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:50:07.825983   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:50:07.826031   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:50:07.832602   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 18:50:07.845512   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 18:50:07.858541   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 18:50:07.864166   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 18:50:07.864244   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 18:50:07.871451   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 18:50:07.885597   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 18:50:07.898895   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 18:50:07.904401   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 18:50:07.904471   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 18:50:07.911312   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
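
Note: the openssl/ln commands above are the host trust-store wiring: each CA file copied to /usr/share/ca-certificates is hashed with `openssl x509 -hash` and linked into /etc/ssl/certs under `<hash>.0`, which is how OpenSSL-based clients find it. A minimal Go sketch of those same two steps (illustrative only, not the certs.go implementation):

    // Illustrative sketch: link a CA certificate into /etc/ssl/certs under its
    // OpenSSL subject hash, mirroring the "openssl x509 -hash" + "ln -fs"
    // commands in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkCACert(certPath string) error {
    	// Ask openssl for the subject hash, exactly as the log does.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Replace any stale link so OpenSSL's hash-based lookup picks up the cert.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
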
	I0425 18:50:07.923983   24262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 18:50:07.929153   24262 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 18:50:07.929237   24262 kubeadm.go:391] StartCluster: {Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:50:07.929313   24262 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 18:50:07.929374   24262 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 18:50:07.972658   24262 cri.go:89] found id: ""
	I0425 18:50:07.972745   24262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0425 18:50:07.983957   24262 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 18:50:07.995766   24262 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 18:50:08.007742   24262 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 18:50:08.007772   24262 kubeadm.go:156] found existing configuration files:
	
	I0425 18:50:08.007813   24262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 18:50:08.018888   24262 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 18:50:08.018948   24262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 18:50:08.030479   24262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 18:50:08.041039   24262 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 18:50:08.041109   24262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 18:50:08.052211   24262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 18:50:08.062770   24262 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 18:50:08.062883   24262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 18:50:08.073789   24262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 18:50:08.084165   24262 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 18:50:08.084235   24262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
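
Note: the grep/rm pairs above are minikube's stale-config check before kubeadm init: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so init can regenerate it. A rough sketch of that assumed logic (not the actual kubeadm.go source):

    // Keep a kubeconfig only if it points at the expected control-plane
    // endpoint; otherwise remove it so "kubeadm init" recreates it.
    package main

    import (
    	"os"
    	"strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func cleanStaleConfigs(paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil {
    			continue // missing file: nothing to clean, as in the log above
    		}
    		if !strings.Contains(string(data), endpoint) {
    			_ = os.Remove(p) // stale endpoint: drop the file
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs([]string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
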
	I0425 18:50:08.095495   24262 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 18:50:08.219358   24262 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 18:50:08.219429   24262 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 18:50:08.354045   24262 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 18:50:08.354184   24262 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 18:50:08.354289   24262 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 18:50:08.627745   24262 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 18:50:08.807241   24262 out.go:204]   - Generating certificates and keys ...
	I0425 18:50:08.807347   24262 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 18:50:08.807427   24262 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 18:50:08.807491   24262 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0425 18:50:08.876352   24262 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0425 18:50:09.019219   24262 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0425 18:50:09.229578   24262 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0425 18:50:09.612187   24262 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0425 18:50:09.612367   24262 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-912667 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I0425 18:50:09.720142   24262 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0425 18:50:09.720471   24262 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-912667 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I0425 18:50:09.944095   24262 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0425 18:50:10.141302   24262 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0425 18:50:10.311087   24262 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0425 18:50:10.311154   24262 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 18:50:10.428002   24262 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 18:50:10.732361   24262 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 18:50:11.005871   24262 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 18:50:11.228112   24262 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 18:50:11.451352   24262 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 18:50:11.452350   24262 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 18:50:11.455653   24262 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 18:50:11.457640   24262 out.go:204]   - Booting up control plane ...
	I0425 18:50:11.457748   24262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 18:50:11.457840   24262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 18:50:11.457954   24262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 18:50:11.476021   24262 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 18:50:11.476125   24262 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 18:50:11.476210   24262 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 18:50:11.616297   24262 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 18:50:11.616387   24262 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 18:50:12.118062   24262 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.859749ms
	I0425 18:50:12.118201   24262 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 18:50:18.104952   24262 kubeadm.go:309] [api-check] The API server is healthy after 5.988219274s
	I0425 18:50:18.122983   24262 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 18:50:18.139515   24262 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 18:50:18.177717   24262 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 18:50:18.177976   24262 kubeadm.go:309] [mark-control-plane] Marking the node ha-912667 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 18:50:18.194139   24262 kubeadm.go:309] [bootstrap-token] Using token: oba30z.3wm2lnpm5w9re787
	I0425 18:50:18.195616   24262 out.go:204]   - Configuring RBAC rules ...
	I0425 18:50:18.195712   24262 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 18:50:18.200271   24262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 18:50:18.219703   24262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 18:50:18.223552   24262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 18:50:18.227647   24262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 18:50:18.231336   24262 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 18:50:18.513584   24262 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 18:50:18.960692   24262 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 18:50:19.512641   24262 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 18:50:19.513740   24262 kubeadm.go:309] 
	I0425 18:50:19.513824   24262 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 18:50:19.513833   24262 kubeadm.go:309] 
	I0425 18:50:19.513916   24262 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 18:50:19.513953   24262 kubeadm.go:309] 
	I0425 18:50:19.513992   24262 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 18:50:19.514083   24262 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 18:50:19.514170   24262 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 18:50:19.514187   24262 kubeadm.go:309] 
	I0425 18:50:19.514265   24262 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 18:50:19.514275   24262 kubeadm.go:309] 
	I0425 18:50:19.514329   24262 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 18:50:19.514340   24262 kubeadm.go:309] 
	I0425 18:50:19.514404   24262 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 18:50:19.514528   24262 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 18:50:19.514615   24262 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 18:50:19.514627   24262 kubeadm.go:309] 
	I0425 18:50:19.514747   24262 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 18:50:19.514870   24262 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 18:50:19.514881   24262 kubeadm.go:309] 
	I0425 18:50:19.514986   24262 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token oba30z.3wm2lnpm5w9re787 \
	I0425 18:50:19.515127   24262 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
	I0425 18:50:19.515158   24262 kubeadm.go:309] 	--control-plane 
	I0425 18:50:19.515168   24262 kubeadm.go:309] 
	I0425 18:50:19.515311   24262 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 18:50:19.515324   24262 kubeadm.go:309] 
	I0425 18:50:19.515438   24262 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token oba30z.3wm2lnpm5w9re787 \
	I0425 18:50:19.515578   24262 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed 
	I0425 18:50:19.516099   24262 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
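
Note: the join commands printed above carry --discovery-token-ca-cert-hash, which kubeadm documents as "sha256:" over the DER-encoded Subject Public Key Info of the cluster CA certificate. A small Go sketch that recomputes it from ca.crt for comparison (assumes the cert path used elsewhere in this log):

    // Recompute the --discovery-token-ca-cert-hash from the cluster CA.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM data in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// SPKI DER of the CA public key, hashed with SHA-256.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
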
	I0425 18:50:19.516137   24262 cni.go:84] Creating CNI manager for ""
	I0425 18:50:19.516152   24262 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0425 18:50:19.518049   24262 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0425 18:50:19.519335   24262 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0425 18:50:19.527699   24262 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0425 18:50:19.527721   24262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0425 18:50:19.548772   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0425 18:50:19.991508   24262 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 18:50:19.991581   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:19.991610   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-912667 minikube.k8s.io/updated_at=2024_04_25T18_50_19_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=ha-912667 minikube.k8s.io/primary=true
	I0425 18:50:20.177477   24262 ops.go:34] apiserver oom_adj: -16
	I0425 18:50:20.177581   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:20.677787   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:21.178114   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:21.678570   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:22.178351   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:22.678534   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:23.178462   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:23.678536   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:24.177821   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:24.678601   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:25.178106   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:25.678571   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:26.178062   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:26.678017   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:27.177671   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:27.678349   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:28.177659   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:28.678268   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:29.178328   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:29.678429   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:30.177775   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:30.678518   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:31.177997   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:31.678560   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:32.177910   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 18:50:32.274129   24262 kubeadm.go:1107] duration metric: took 12.282619837s to wait for elevateKubeSystemPrivileges
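
Note: the repeated "kubectl get sa default" calls above (one every ~500ms) are a readiness poll: minikube waits for the default service account to exist before it can grant kube-system privileges. A minimal poll loop in that spirit (assumes kubectl on PATH and the kubeconfig path from the log; not minikube's own code):

    // Retry "get sa default" every 500ms until it succeeds or times out.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil // service account exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		panic(err)
    	}
    }
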
	W0425 18:50:32.274167   24262 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 18:50:32.274174   24262 kubeadm.go:393] duration metric: took 24.34494449s to StartCluster
	I0425 18:50:32.274189   24262 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:32.274260   24262 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:50:32.274925   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:50:32.275140   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0425 18:50:32.275171   24262 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:50:32.275191   24262 start.go:240] waiting for startup goroutines ...
	I0425 18:50:32.275212   24262 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 18:50:32.275285   24262 addons.go:69] Setting storage-provisioner=true in profile "ha-912667"
	I0425 18:50:32.275298   24262 addons.go:69] Setting default-storageclass=true in profile "ha-912667"
	I0425 18:50:32.275316   24262 addons.go:234] Setting addon storage-provisioner=true in "ha-912667"
	I0425 18:50:32.275331   24262 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-912667"
	I0425 18:50:32.275343   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:50:32.275457   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:50:32.275754   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:32.275788   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:32.275808   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:32.275818   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:32.291077   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45781
	I0425 18:50:32.291104   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37037
	I0425 18:50:32.291597   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:32.291599   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:32.292141   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:32.292189   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:32.292297   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:32.292340   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:32.292508   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:32.292632   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:32.292681   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:50:32.293167   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:32.293213   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:32.294845   24262 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:50:32.295113   24262 kapi.go:59] client config for ha-912667: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt", KeyFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key", CAFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0425 18:50:32.295699   24262 cert_rotation.go:137] Starting client certificate rotation controller
	I0425 18:50:32.295824   24262 addons.go:234] Setting addon default-storageclass=true in "ha-912667"
	I0425 18:50:32.295865   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:50:32.296164   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:32.296202   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:32.307958   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I0425 18:50:32.308411   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:32.308899   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:32.308918   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:32.309254   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:32.309458   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:50:32.309940   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
	I0425 18:50:32.310308   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:32.310783   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:32.310808   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:32.311115   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:32.311249   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:32.313220   24262 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 18:50:32.311669   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:32.314378   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:32.314466   24262 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 18:50:32.314486   24262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 18:50:32.314502   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:32.317152   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:32.317594   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:32.317630   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:32.317726   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:32.317886   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:32.318037   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:32.318199   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:32.328812   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46151
	I0425 18:50:32.329168   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:32.329593   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:32.329615   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:32.329904   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:32.330052   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:50:32.331552   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:50:32.331784   24262 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 18:50:32.331797   24262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 18:50:32.331807   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:50:32.334376   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:32.334730   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:50:32.334754   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:50:32.334859   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:50:32.334993   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:50:32.335150   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:50:32.335261   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:50:32.475850   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0425 18:50:32.523374   24262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 18:50:32.535737   24262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 18:50:33.005661   24262 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
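
Note: the long sed pipeline above rewrites the coredns ConfigMap so that cluster DNS resolves host.minikube.internal to the host gateway (192.168.39.1). A hedged client-go equivalent of that edit, as an assumed sketch rather than minikube's implementation:

    // Inject a hosts{} block resolving host.minikube.internal into the
    // coredns Corefile via the API instead of sed-over-SSH.
    package main

    import (
    	"context"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()

    	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	hosts := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
    	// Insert the hosts block just before the forward stanza, like the sed script does.
    	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
    		"        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)

    	if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }
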
	I0425 18:50:33.005783   24262 main.go:141] libmachine: Making call to close driver server
	I0425 18:50:33.005808   24262 main.go:141] libmachine: (ha-912667) Calling .Close
	I0425 18:50:33.006119   24262 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:50:33.006137   24262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:50:33.006145   24262 main.go:141] libmachine: Making call to close driver server
	I0425 18:50:33.006152   24262 main.go:141] libmachine: (ha-912667) Calling .Close
	I0425 18:50:33.006385   24262 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:50:33.006405   24262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:50:33.006514   24262 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0425 18:50:33.006521   24262 round_trippers.go:469] Request Headers:
	I0425 18:50:33.006545   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:50:33.006554   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:50:33.015352   24262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0425 18:50:33.015907   24262 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0425 18:50:33.015925   24262 round_trippers.go:469] Request Headers:
	I0425 18:50:33.015935   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:50:33.015943   24262 round_trippers.go:473]     Content-Type: application/json
	I0425 18:50:33.015947   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:50:33.018615   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:50:33.018824   24262 main.go:141] libmachine: Making call to close driver server
	I0425 18:50:33.018839   24262 main.go:141] libmachine: (ha-912667) Calling .Close
	I0425 18:50:33.019075   24262 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:50:33.019098   24262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:50:33.019111   24262 main.go:141] libmachine: (ha-912667) DBG | Closing plugin on server side
	I0425 18:50:33.368359   24262 main.go:141] libmachine: Making call to close driver server
	I0425 18:50:33.368387   24262 main.go:141] libmachine: (ha-912667) Calling .Close
	I0425 18:50:33.368681   24262 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:50:33.368696   24262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:50:33.368706   24262 main.go:141] libmachine: Making call to close driver server
	I0425 18:50:33.368737   24262 main.go:141] libmachine: (ha-912667) DBG | Closing plugin on server side
	I0425 18:50:33.368784   24262 main.go:141] libmachine: (ha-912667) Calling .Close
	I0425 18:50:33.369019   24262 main.go:141] libmachine: Successfully made call to close driver server
	I0425 18:50:33.369043   24262 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 18:50:33.369046   24262 main.go:141] libmachine: (ha-912667) DBG | Closing plugin on server side
	I0425 18:50:33.370964   24262 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0425 18:50:33.371954   24262 addons.go:505] duration metric: took 1.09675326s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0425 18:50:33.371989   24262 start.go:245] waiting for cluster config update ...
	I0425 18:50:33.372004   24262 start.go:254] writing updated cluster config ...
	I0425 18:50:33.373842   24262 out.go:177] 
	I0425 18:50:33.375782   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:50:33.375868   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:50:33.377584   24262 out.go:177] * Starting "ha-912667-m02" control-plane node in "ha-912667" cluster
	I0425 18:50:33.379119   24262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:50:33.379141   24262 cache.go:56] Caching tarball of preloaded images
	I0425 18:50:33.379250   24262 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 18:50:33.379270   24262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 18:50:33.379334   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:50:33.379483   24262 start.go:360] acquireMachinesLock for ha-912667-m02: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 18:50:33.379525   24262 start.go:364] duration metric: took 22.545µs to acquireMachinesLock for "ha-912667-m02"
	I0425 18:50:33.379541   24262 start.go:93] Provisioning new machine with config: &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:50:33.379637   24262 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0425 18:50:33.381229   24262 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0425 18:50:33.381301   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:50:33.381332   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:50:33.396569   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I0425 18:50:33.396990   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:50:33.397539   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:50:33.397565   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:50:33.397874   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:50:33.398090   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetMachineName
	I0425 18:50:33.398285   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:50:33.398469   24262 start.go:159] libmachine.API.Create for "ha-912667" (driver="kvm2")
	I0425 18:50:33.398502   24262 client.go:168] LocalClient.Create starting
	I0425 18:50:33.398540   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 18:50:33.398580   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:50:33.398600   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:50:33.398664   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 18:50:33.398712   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:50:33.398732   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:50:33.398767   24262 main.go:141] libmachine: Running pre-create checks...
	I0425 18:50:33.398778   24262 main.go:141] libmachine: (ha-912667-m02) Calling .PreCreateCheck
	I0425 18:50:33.398958   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetConfigRaw
	I0425 18:50:33.399414   24262 main.go:141] libmachine: Creating machine...
	I0425 18:50:33.399432   24262 main.go:141] libmachine: (ha-912667-m02) Calling .Create
	I0425 18:50:33.399550   24262 main.go:141] libmachine: (ha-912667-m02) Creating KVM machine...
	I0425 18:50:33.400783   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found existing default KVM network
	I0425 18:50:33.400926   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found existing private KVM network mk-ha-912667
	I0425 18:50:33.401066   24262 main.go:141] libmachine: (ha-912667-m02) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02 ...
	I0425 18:50:33.401086   24262 main.go:141] libmachine: (ha-912667-m02) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 18:50:33.401153   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:33.401064   24677 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:50:33.401270   24262 main.go:141] libmachine: (ha-912667-m02) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 18:50:33.624278   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:33.624161   24677 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa...
	I0425 18:50:33.767748   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:33.767636   24677 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/ha-912667-m02.rawdisk...
	I0425 18:50:33.767776   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Writing magic tar header
	I0425 18:50:33.767816   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Writing SSH key tar header
	I0425 18:50:33.767835   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:33.767749   24677 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02 ...
	I0425 18:50:33.767892   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02
	I0425 18:50:33.767922   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02 (perms=drwx------)
	I0425 18:50:33.767938   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 18:50:33.767957   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:50:33.767971   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 18:50:33.767986   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 18:50:33.767995   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home/jenkins
	I0425 18:50:33.768006   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Checking permissions on dir: /home
	I0425 18:50:33.768030   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 18:50:33.768043   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Skipping /home - not owner
	I0425 18:50:33.768056   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 18:50:33.768068   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 18:50:33.768082   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 18:50:33.768094   24262 main.go:141] libmachine: (ha-912667-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 18:50:33.768104   24262 main.go:141] libmachine: (ha-912667-m02) Creating domain...
	I0425 18:50:33.769015   24262 main.go:141] libmachine: (ha-912667-m02) define libvirt domain using xml: 
	I0425 18:50:33.769033   24262 main.go:141] libmachine: (ha-912667-m02) <domain type='kvm'>
	I0425 18:50:33.769059   24262 main.go:141] libmachine: (ha-912667-m02)   <name>ha-912667-m02</name>
	I0425 18:50:33.769081   24262 main.go:141] libmachine: (ha-912667-m02)   <memory unit='MiB'>2200</memory>
	I0425 18:50:33.769117   24262 main.go:141] libmachine: (ha-912667-m02)   <vcpu>2</vcpu>
	I0425 18:50:33.769140   24262 main.go:141] libmachine: (ha-912667-m02)   <features>
	I0425 18:50:33.769152   24262 main.go:141] libmachine: (ha-912667-m02)     <acpi/>
	I0425 18:50:33.769163   24262 main.go:141] libmachine: (ha-912667-m02)     <apic/>
	I0425 18:50:33.769193   24262 main.go:141] libmachine: (ha-912667-m02)     <pae/>
	I0425 18:50:33.769215   24262 main.go:141] libmachine: (ha-912667-m02)     
	I0425 18:50:33.769227   24262 main.go:141] libmachine: (ha-912667-m02)   </features>
	I0425 18:50:33.769238   24262 main.go:141] libmachine: (ha-912667-m02)   <cpu mode='host-passthrough'>
	I0425 18:50:33.769249   24262 main.go:141] libmachine: (ha-912667-m02)   
	I0425 18:50:33.769258   24262 main.go:141] libmachine: (ha-912667-m02)   </cpu>
	I0425 18:50:33.769272   24262 main.go:141] libmachine: (ha-912667-m02)   <os>
	I0425 18:50:33.769282   24262 main.go:141] libmachine: (ha-912667-m02)     <type>hvm</type>
	I0425 18:50:33.769291   24262 main.go:141] libmachine: (ha-912667-m02)     <boot dev='cdrom'/>
	I0425 18:50:33.769302   24262 main.go:141] libmachine: (ha-912667-m02)     <boot dev='hd'/>
	I0425 18:50:33.769312   24262 main.go:141] libmachine: (ha-912667-m02)     <bootmenu enable='no'/>
	I0425 18:50:33.769326   24262 main.go:141] libmachine: (ha-912667-m02)   </os>
	I0425 18:50:33.769338   24262 main.go:141] libmachine: (ha-912667-m02)   <devices>
	I0425 18:50:33.769347   24262 main.go:141] libmachine: (ha-912667-m02)     <disk type='file' device='cdrom'>
	I0425 18:50:33.769359   24262 main.go:141] libmachine: (ha-912667-m02)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/boot2docker.iso'/>
	I0425 18:50:33.769371   24262 main.go:141] libmachine: (ha-912667-m02)       <target dev='hdc' bus='scsi'/>
	I0425 18:50:33.769395   24262 main.go:141] libmachine: (ha-912667-m02)       <readonly/>
	I0425 18:50:33.769405   24262 main.go:141] libmachine: (ha-912667-m02)     </disk>
	I0425 18:50:33.769425   24262 main.go:141] libmachine: (ha-912667-m02)     <disk type='file' device='disk'>
	I0425 18:50:33.769446   24262 main.go:141] libmachine: (ha-912667-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 18:50:33.769471   24262 main.go:141] libmachine: (ha-912667-m02)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/ha-912667-m02.rawdisk'/>
	I0425 18:50:33.769490   24262 main.go:141] libmachine: (ha-912667-m02)       <target dev='hda' bus='virtio'/>
	I0425 18:50:33.769503   24262 main.go:141] libmachine: (ha-912667-m02)     </disk>
	I0425 18:50:33.769515   24262 main.go:141] libmachine: (ha-912667-m02)     <interface type='network'>
	I0425 18:50:33.769527   24262 main.go:141] libmachine: (ha-912667-m02)       <source network='mk-ha-912667'/>
	I0425 18:50:33.769536   24262 main.go:141] libmachine: (ha-912667-m02)       <model type='virtio'/>
	I0425 18:50:33.769548   24262 main.go:141] libmachine: (ha-912667-m02)     </interface>
	I0425 18:50:33.769565   24262 main.go:141] libmachine: (ha-912667-m02)     <interface type='network'>
	I0425 18:50:33.769578   24262 main.go:141] libmachine: (ha-912667-m02)       <source network='default'/>
	I0425 18:50:33.769589   24262 main.go:141] libmachine: (ha-912667-m02)       <model type='virtio'/>
	I0425 18:50:33.769601   24262 main.go:141] libmachine: (ha-912667-m02)     </interface>
	I0425 18:50:33.769610   24262 main.go:141] libmachine: (ha-912667-m02)     <serial type='pty'>
	I0425 18:50:33.769623   24262 main.go:141] libmachine: (ha-912667-m02)       <target port='0'/>
	I0425 18:50:33.769633   24262 main.go:141] libmachine: (ha-912667-m02)     </serial>
	I0425 18:50:33.769645   24262 main.go:141] libmachine: (ha-912667-m02)     <console type='pty'>
	I0425 18:50:33.769659   24262 main.go:141] libmachine: (ha-912667-m02)       <target type='serial' port='0'/>
	I0425 18:50:33.769670   24262 main.go:141] libmachine: (ha-912667-m02)     </console>
	I0425 18:50:33.769678   24262 main.go:141] libmachine: (ha-912667-m02)     <rng model='virtio'>
	I0425 18:50:33.769687   24262 main.go:141] libmachine: (ha-912667-m02)       <backend model='random'>/dev/random</backend>
	I0425 18:50:33.769697   24262 main.go:141] libmachine: (ha-912667-m02)     </rng>
	I0425 18:50:33.769706   24262 main.go:141] libmachine: (ha-912667-m02)     
	I0425 18:50:33.769715   24262 main.go:141] libmachine: (ha-912667-m02)     
	I0425 18:50:33.769727   24262 main.go:141] libmachine: (ha-912667-m02)   </devices>
	I0425 18:50:33.769741   24262 main.go:141] libmachine: (ha-912667-m02) </domain>
	I0425 18:50:33.769772   24262 main.go:141] libmachine: (ha-912667-m02) 
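
Note: the XML above is the libvirt domain definition for the ha-912667-m02 VM (CD-ROM boot from the boot2docker ISO, a raw disk, and two virtio NICs on the default and mk-ha-912667 networks). A hedged sketch of defining and starting such a domain with the libvirt Go bindings (github.com/libvirt/libvirt-go is an assumption here; minikube drives this through its kvm2 machine driver):

    package main

    import (
    	"os"

    	libvirt "github.com/libvirt/libvirt-go"
    )

    func defineAndStart(domainXML string) error {
    	conn, err := libvirt.NewConnect("qemu:///system") // URI from KVMQemuURI in the config above
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
    	if err != nil {
    		return err
    	}
    	defer dom.Free()

    	return dom.Create() // boot the VM; the DHCP lease appears later in the log
    }

    func main() {
    	xml, err := os.ReadFile("ha-912667-m02.xml") // hypothetical dump of the XML above
    	if err != nil {
    		panic(err)
    	}
    	if err := defineAndStart(string(xml)); err != nil {
    		panic(err)
    	}
    }
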
	I0425 18:50:33.776550   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:13:07:ce in network default
	I0425 18:50:33.777140   24262 main.go:141] libmachine: (ha-912667-m02) Ensuring networks are active...
	I0425 18:50:33.777162   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:33.777943   24262 main.go:141] libmachine: (ha-912667-m02) Ensuring network default is active
	I0425 18:50:33.778329   24262 main.go:141] libmachine: (ha-912667-m02) Ensuring network mk-ha-912667 is active
	I0425 18:50:33.778759   24262 main.go:141] libmachine: (ha-912667-m02) Getting domain xml...
	I0425 18:50:33.779585   24262 main.go:141] libmachine: (ha-912667-m02) Creating domain...
	I0425 18:50:35.015579   24262 main.go:141] libmachine: (ha-912667-m02) Waiting to get IP...
	I0425 18:50:35.016401   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:35.016845   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:35.016875   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:35.016821   24677 retry.go:31] will retry after 272.31751ms: waiting for machine to come up
	I0425 18:50:35.290272   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:35.290859   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:35.290889   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:35.290809   24677 retry.go:31] will retry after 355.818103ms: waiting for machine to come up
	I0425 18:50:35.648332   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:35.648726   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:35.648764   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:35.648674   24677 retry.go:31] will retry after 313.196477ms: waiting for machine to come up
	I0425 18:50:35.962837   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:35.963324   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:35.963354   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:35.963277   24677 retry.go:31] will retry after 447.300584ms: waiting for machine to come up
	I0425 18:50:36.411853   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:36.412326   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:36.412350   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:36.412288   24677 retry.go:31] will retry after 735.041089ms: waiting for machine to come up
	I0425 18:50:37.148697   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:37.149163   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:37.149207   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:37.149105   24677 retry.go:31] will retry after 790.482572ms: waiting for machine to come up
	I0425 18:50:37.940815   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:37.941179   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:37.941227   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:37.941138   24677 retry.go:31] will retry after 838.320133ms: waiting for machine to come up
	I0425 18:50:38.780783   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:38.781250   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:38.781276   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:38.781217   24677 retry.go:31] will retry after 1.393143408s: waiting for machine to come up
	I0425 18:50:40.176650   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:40.177058   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:40.177082   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:40.177019   24677 retry.go:31] will retry after 1.382169864s: waiting for machine to come up
	I0425 18:50:41.560741   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:41.561116   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:41.561162   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:41.561079   24677 retry.go:31] will retry after 1.653935327s: waiting for machine to come up
	I0425 18:50:43.216296   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:43.216713   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:43.216737   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:43.216679   24677 retry.go:31] will retry after 1.806231222s: waiting for machine to come up
	I0425 18:50:45.024850   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:45.025330   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:45.025378   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:45.025319   24677 retry.go:31] will retry after 3.576127864s: waiting for machine to come up
	I0425 18:50:48.603197   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:48.603520   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:48.603551   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:48.603473   24677 retry.go:31] will retry after 3.829916567s: waiting for machine to come up
	I0425 18:50:52.437454   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:52.437860   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find current IP address of domain ha-912667-m02 in network mk-ha-912667
	I0425 18:50:52.437890   24262 main.go:141] libmachine: (ha-912667-m02) DBG | I0425 18:50:52.437815   24677 retry.go:31] will retry after 4.932389568s: waiting for machine to come up
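
The repeated "will retry after ..." lines above come from minikube's generic retry helper polling libvirt for the guest's DHCP lease. A minimal sketch of that growing, jittered backoff pattern, with illustrative names rather than minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxAttempts is reached,
// sleeping a randomized, growing delay between attempts - the same shape as
// the "will retry after 272ms / 355ms / ..." lines in the log above.
func retryWithBackoff(maxAttempts int, fn func() error) error {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay each round
	}
	return errors.New("gave up waiting for machine to come up")
}

func main() {
	_ = retryWithBackoff(10, func() error {
		// e.g. look up the domain's DHCP lease; fail until an address appears.
		return errors.New("unable to find current IP address")
	})
}
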
	I0425 18:50:57.371779   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:57.372290   24262 main.go:141] libmachine: (ha-912667-m02) Found IP for machine: 192.168.39.66
	I0425 18:50:57.372327   24262 main.go:141] libmachine: (ha-912667-m02) Reserving static IP address...
	I0425 18:50:57.372341   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has current primary IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:57.372644   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find host DHCP lease matching {name: "ha-912667-m02", mac: "52:54:00:5a:58:a0", ip: "192.168.39.66"} in network mk-ha-912667
	I0425 18:50:57.442440   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Getting to WaitForSSH function...
	I0425 18:50:57.442470   24262 main.go:141] libmachine: (ha-912667-m02) Reserved static IP address: 192.168.39.66
	I0425 18:50:57.442485   24262 main.go:141] libmachine: (ha-912667-m02) Waiting for SSH to be available...
	I0425 18:50:57.444830   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:50:57.445165   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667
	I0425 18:50:57.445197   24262 main.go:141] libmachine: (ha-912667-m02) DBG | unable to find defined IP address of network mk-ha-912667 interface with MAC address 52:54:00:5a:58:a0
	I0425 18:50:57.445339   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Using SSH client type: external
	I0425 18:50:57.445364   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa (-rw-------)
	I0425 18:50:57.445403   24262 main.go:141] libmachine: (ha-912667-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 18:50:57.445421   24262 main.go:141] libmachine: (ha-912667-m02) DBG | About to run SSH command:
	I0425 18:50:57.445448   24262 main.go:141] libmachine: (ha-912667-m02) DBG | exit 0
	I0425 18:50:57.448897   24262 main.go:141] libmachine: (ha-912667-m02) DBG | SSH cmd err, output: exit status 255: 
	I0425 18:50:57.448918   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0425 18:50:57.448934   24262 main.go:141] libmachine: (ha-912667-m02) DBG | command : exit 0
	I0425 18:50:57.448944   24262 main.go:141] libmachine: (ha-912667-m02) DBG | err     : exit status 255
	I0425 18:50:57.448958   24262 main.go:141] libmachine: (ha-912667-m02) DBG | output  : 
	I0425 18:51:00.449130   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Getting to WaitForSSH function...
	I0425 18:51:00.451492   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.451852   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:00.451879   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.452040   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Using SSH client type: external
	I0425 18:51:00.452066   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa (-rw-------)
	I0425 18:51:00.452099   24262 main.go:141] libmachine: (ha-912667-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 18:51:00.452116   24262 main.go:141] libmachine: (ha-912667-m02) DBG | About to run SSH command:
	I0425 18:51:00.452125   24262 main.go:141] libmachine: (ha-912667-m02) DBG | exit 0
	I0425 18:51:00.582574   24262 main.go:141] libmachine: (ha-912667-m02) DBG | SSH cmd err, output: <nil>: 
	I0425 18:51:00.582868   24262 main.go:141] libmachine: (ha-912667-m02) KVM machine creation complete!
	I0425 18:51:00.583228   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetConfigRaw
	I0425 18:51:00.583839   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:00.584002   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:00.584136   24262 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 18:51:00.584148   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 18:51:00.585297   24262 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 18:51:00.585311   24262 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 18:51:00.585317   24262 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 18:51:00.585324   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:00.587757   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.588116   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:00.588152   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.588285   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:00.588474   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.588663   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.588826   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:00.588976   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:00.589188   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:00.589203   24262 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 18:51:00.701950   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
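
At this point the provisioner switches from shelling out to /usr/bin/ssh to the "native" Go SSH client and runs exit 0 to confirm the guest is reachable. A minimal sketch of that liveness check with golang.org/x/crypto/ssh; the key path and address are taken from the log, everything else is an assumption rather than minikube's actual implementation:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}

	client, err := ssh.Dial("tcp", "192.168.39.66:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// The same probe the log runs: a no-op command that must exit 0.
	if err := sess.Run("exit 0"); err != nil {
		log.Fatalf("guest not ready: %v", err)
	}
	log.Println("SSH is available")
}
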
	I0425 18:51:00.701976   24262 main.go:141] libmachine: Detecting the provisioner...
	I0425 18:51:00.701985   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:00.704856   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.705163   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:00.705192   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.705338   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:00.705524   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.705719   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.705917   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:00.706078   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:00.706313   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:00.706329   24262 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 18:51:00.816075   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 18:51:00.816133   24262 main.go:141] libmachine: found compatible host: buildroot
	I0425 18:51:00.816140   24262 main.go:141] libmachine: Provisioning with buildroot...
	I0425 18:51:00.816147   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetMachineName
	I0425 18:51:00.816416   24262 buildroot.go:166] provisioning hostname "ha-912667-m02"
	I0425 18:51:00.816446   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetMachineName
	I0425 18:51:00.816639   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:00.819389   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.819767   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:00.819799   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.819979   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:00.820161   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.820323   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.820446   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:00.820601   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:00.820788   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:00.820801   24262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-912667-m02 && echo "ha-912667-m02" | sudo tee /etc/hostname
	I0425 18:51:00.951114   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-912667-m02
	
	I0425 18:51:00.951147   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:00.953844   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.954310   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:00.954338   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:00.954491   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:00.954667   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.954817   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:00.954923   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:00.955121   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:00.955274   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:00.955291   24262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-912667-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-912667-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-912667-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 18:51:01.076905   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 18:51:01.076933   24262 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 18:51:01.076949   24262 buildroot.go:174] setting up certificates
	I0425 18:51:01.076957   24262 provision.go:84] configureAuth start
	I0425 18:51:01.076965   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetMachineName
	I0425 18:51:01.077193   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:51:01.079866   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.080221   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.080248   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.080368   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.082445   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.082727   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.082759   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.082853   24262 provision.go:143] copyHostCerts
	I0425 18:51:01.082876   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:51:01.082911   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 18:51:01.082925   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:51:01.082987   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 18:51:01.083083   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:51:01.083104   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 18:51:01.083109   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:51:01.083133   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 18:51:01.083188   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:51:01.083204   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 18:51:01.083211   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:51:01.083231   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 18:51:01.083273   24262 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.ha-912667-m02 san=[127.0.0.1 192.168.39.66 ha-912667-m02 localhost minikube]
	I0425 18:51:01.174452   24262 provision.go:177] copyRemoteCerts
	I0425 18:51:01.174508   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 18:51:01.174533   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.177076   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.177364   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.177388   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.177531   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.177722   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.177881   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.177995   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	I0425 18:51:01.265418   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 18:51:01.265487   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0425 18:51:01.301867   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 18:51:01.301936   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 18:51:01.329938   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 18:51:01.330007   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 18:51:01.357857   24262 provision.go:87] duration metric: took 280.886715ms to configureAuth
	I0425 18:51:01.357890   24262 buildroot.go:189] setting minikube options for container-runtime
	I0425 18:51:01.358063   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:51:01.358152   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.360692   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.361069   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.361100   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.361283   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.361511   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.361697   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.361874   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.362046   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:01.362236   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:01.362253   24262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 18:51:01.652870   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 18:51:01.652908   24262 main.go:141] libmachine: Checking connection to Docker...
	I0425 18:51:01.652918   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetURL
	I0425 18:51:01.654109   24262 main.go:141] libmachine: (ha-912667-m02) DBG | Using libvirt version 6000000
	I0425 18:51:01.656105   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.656321   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.656342   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.656517   24262 main.go:141] libmachine: Docker is up and running!
	I0425 18:51:01.656529   24262 main.go:141] libmachine: Reticulating splines...
	I0425 18:51:01.656536   24262 client.go:171] duration metric: took 28.258024153s to LocalClient.Create
	I0425 18:51:01.656555   24262 start.go:167] duration metric: took 28.25808827s to libmachine.API.Create "ha-912667"
	I0425 18:51:01.656564   24262 start.go:293] postStartSetup for "ha-912667-m02" (driver="kvm2")
	I0425 18:51:01.656572   24262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 18:51:01.656589   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:01.656809   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 18:51:01.656830   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.658688   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.658975   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.659018   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.659091   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.659243   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.659380   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.659504   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	I0425 18:51:01.745446   24262 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 18:51:01.750300   24262 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 18:51:01.750323   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 18:51:01.750381   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 18:51:01.750445   24262 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 18:51:01.750457   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 18:51:01.750533   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 18:51:01.760679   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:51:01.788079   24262 start.go:296] duration metric: took 131.502365ms for postStartSetup
	I0425 18:51:01.788129   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetConfigRaw
	I0425 18:51:01.788753   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:51:01.791276   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.791619   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.791641   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.791921   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:51:01.792166   24262 start.go:128] duration metric: took 28.412517698s to createHost
	I0425 18:51:01.792190   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.794775   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.795128   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.795154   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.795356   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.795558   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.795702   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.795863   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.796007   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:51:01.796177   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.66 22 <nil> <nil>}
	I0425 18:51:01.796191   24262 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 18:51:01.907571   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714071061.895155103
	
	I0425 18:51:01.907590   24262 fix.go:216] guest clock: 1714071061.895155103
	I0425 18:51:01.907596   24262 fix.go:229] Guest: 2024-04-25 18:51:01.895155103 +0000 UTC Remote: 2024-04-25 18:51:01.792180512 +0000 UTC m=+86.367594385 (delta=102.974591ms)
	I0425 18:51:01.907613   24262 fix.go:200] guest clock delta is within tolerance: 102.974591ms
	I0425 18:51:01.907620   24262 start.go:83] releasing machines lock for "ha-912667-m02", held for 28.528086055s
	I0425 18:51:01.907640   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:01.907925   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:51:01.910373   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.910676   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.910705   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.913168   24262 out.go:177] * Found network options:
	I0425 18:51:01.914669   24262 out.go:177]   - NO_PROXY=192.168.39.189
	W0425 18:51:01.915767   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 18:51:01.915815   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:01.916457   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:01.916686   24262 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 18:51:01.916774   24262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 18:51:01.916815   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	W0425 18:51:01.916848   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 18:51:01.916923   24262 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 18:51:01.916946   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 18:51:01.919610   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.919905   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.919988   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.920014   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.920133   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.920312   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:01.920336   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.920352   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:01.920477   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.920625   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 18:51:01.920681   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	I0425 18:51:01.920964   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 18:51:01.921126   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 18:51:01.921293   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	I0425 18:51:02.161922   24262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 18:51:02.168965   24262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 18:51:02.169031   24262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 18:51:02.187890   24262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 18:51:02.187925   24262 start.go:494] detecting cgroup driver to use...
	I0425 18:51:02.187998   24262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 18:51:02.205507   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 18:51:02.221282   24262 docker.go:217] disabling cri-docker service (if available) ...
	I0425 18:51:02.221340   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 18:51:02.239998   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 18:51:02.256143   24262 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 18:51:02.383796   24262 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 18:51:02.546378   24262 docker.go:233] disabling docker service ...
	I0425 18:51:02.546439   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 18:51:02.564419   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 18:51:02.580135   24262 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 18:51:02.732786   24262 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 18:51:02.858389   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 18:51:02.875385   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 18:51:02.897227   24262 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 18:51:02.897285   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.908319   24262 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 18:51:02.908366   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.920325   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.932150   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.944074   24262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 18:51:02.956417   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.968165   24262 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.988373   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:51:02.999369   24262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 18:51:03.008969   24262 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 18:51:03.009010   24262 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 18:51:03.023941   24262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 18:51:03.034370   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:51:03.166610   24262 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 18:51:03.319627   24262 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 18:51:03.319697   24262 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 18:51:03.324957   24262 start.go:562] Will wait 60s for crictl version
	I0425 18:51:03.325023   24262 ssh_runner.go:195] Run: which crictl
	I0425 18:51:03.329276   24262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 18:51:03.369309   24262 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 18:51:03.369393   24262 ssh_runner.go:195] Run: crio --version
	I0425 18:51:03.402343   24262 ssh_runner.go:195] Run: crio --version
	I0425 18:51:03.434551   24262 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 18:51:03.435880   24262 out.go:177]   - env NO_PROXY=192.168.39.189
	I0425 18:51:03.437106   24262 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 18:51:03.439538   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:03.439878   24262 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:50:49 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 18:51:03.439904   24262 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 18:51:03.440103   24262 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 18:51:03.444466   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:51:03.458794   24262 mustload.go:65] Loading cluster: ha-912667
	I0425 18:51:03.458962   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:51:03.459232   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:51:03.459264   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:51:03.474332   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0425 18:51:03.474706   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:51:03.475141   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:51:03.475159   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:51:03.475482   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:51:03.475659   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:51:03.476988   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:51:03.477290   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:51:03.477314   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:51:03.491072   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33319
	I0425 18:51:03.491489   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:51:03.491914   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:51:03.491934   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:51:03.492166   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:51:03.492290   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:51:03.492452   24262 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667 for IP: 192.168.39.66
	I0425 18:51:03.492465   24262 certs.go:194] generating shared ca certs ...
	I0425 18:51:03.492478   24262 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:51:03.492597   24262 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 18:51:03.492640   24262 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 18:51:03.492650   24262 certs.go:256] generating profile certs ...
	I0425 18:51:03.492734   24262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key
	I0425 18:51:03.492758   24262 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.cf2d0a5d
	I0425 18:51:03.492772   24262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.cf2d0a5d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189 192.168.39.66 192.168.39.254]
	I0425 18:51:03.953364   24262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.cf2d0a5d ...
	I0425 18:51:03.953396   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.cf2d0a5d: {Name:mk5137ba25a9fe77d3cb81ec7a2b2234f923a19a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:51:03.953559   24262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.cf2d0a5d ...
	I0425 18:51:03.953578   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.cf2d0a5d: {Name:mk91a6ad2b600314c57d75711856799b66f33329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:51:03.953650   24262 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.cf2d0a5d -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt
	I0425 18:51:03.953780   24262 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.cf2d0a5d -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key
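
The apiserver serving certificate generated above is signed by the shared minikube CA and carries the cluster service IP, loopback, both control-plane node IPs and the HA virtual IP as SANs. A self-contained sketch of producing such a cert with Go's crypto/x509, using a throwaway CA; subject names and validity periods here are assumptions, only the SAN list mirrors the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA key pair standing in for minikube's ca.key / ca.crt.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate whose IP SANs mirror the list printed in the log above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.189"), net.ParseIP("192.168.39.66"), net.ParseIP("192.168.39.254"),
		},
		DNSNames: []string{"ha-912667-m02", "localhost", "minikube"},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
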
	I0425 18:51:03.953903   24262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key
	I0425 18:51:03.953919   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 18:51:03.953932   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 18:51:03.953942   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 18:51:03.953952   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 18:51:03.953965   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 18:51:03.953975   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 18:51:03.953986   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 18:51:03.953997   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 18:51:03.954041   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 18:51:03.954070   24262 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 18:51:03.954082   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 18:51:03.954111   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 18:51:03.954138   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 18:51:03.954159   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 18:51:03.954194   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:51:03.954239   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:51:03.954254   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 18:51:03.954271   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 18:51:03.954301   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:51:03.957478   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:51:03.957903   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:51:03.957927   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:51:03.958104   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:51:03.958326   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:51:03.958527   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:51:03.958671   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:51:04.034625   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0425 18:51:04.040517   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0425 18:51:04.053076   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0425 18:51:04.058065   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0425 18:51:04.069603   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0425 18:51:04.074596   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0425 18:51:04.086275   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0425 18:51:04.091181   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0425 18:51:04.105497   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0425 18:51:04.110602   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0425 18:51:04.124115   24262 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0425 18:51:04.130248   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0425 18:51:04.143925   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 18:51:04.173909   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 18:51:04.202128   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 18:51:04.230714   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 18:51:04.260359   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0425 18:51:04.288811   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 18:51:04.317244   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 18:51:04.345486   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0425 18:51:04.375240   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 18:51:04.406393   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 18:51:04.434825   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 18:51:04.461976   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0425 18:51:04.481575   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0425 18:51:04.507211   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0425 18:51:04.526733   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0425 18:51:04.545783   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0425 18:51:04.565083   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0425 18:51:04.584386   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0425 18:51:04.603416   24262 ssh_runner.go:195] Run: openssl version
	I0425 18:51:04.609572   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 18:51:04.625232   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:51:04.630553   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:51:04.630609   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:51:04.637817   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 18:51:04.651543   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 18:51:04.666365   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 18:51:04.671559   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 18:51:04.671632   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 18:51:04.678421   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 18:51:04.694278   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 18:51:04.709055   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 18:51:04.714405   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 18:51:04.714466   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 18:51:04.721051   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
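The three test/ls/openssl/ln runs above install each CA into the node's trust store under its OpenSSL subject-hash name. A minimal shell sketch of the same procedure for one certificate — the path and hash match this run, the variable names are only illustrative:

    CA=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CA")     # b5213941 for minikubeCA in this run
    sudo ln -fs "$CA" "/etc/ssl/certs/${HASH}.0"    # hash-named symlink is the layout OpenSSL's CA lookup expects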
	I0425 18:51:04.734427   24262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 18:51:04.739445   24262 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 18:51:04.739509   24262 kubeadm.go:928] updating node {m02 192.168.39.66 8443 v1.30.0 crio true true} ...
	I0425 18:51:04.739598   24262 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-912667-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.66
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 18:51:04.739630   24262 kube-vip.go:111] generating kube-vip config ...
	I0425 18:51:04.739681   24262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0425 18:51:04.759989   24262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0425 18:51:04.760061   24262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
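The manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.168.39.254 on port 8443 and load-balances across the control-plane nodes. A quick way to check the VIP from a workstation once the node is up (illustrative commands, not part of this run):

    curl -k https://192.168.39.254:8443/healthz               # expect "ok" from whichever apiserver currently holds the VIP
    kubectl -n kube-system get pods -o wide | grep kube-vip   # one static pod per control-plane node, named kube-vip-<node>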
	I0425 18:51:04.760110   24262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 18:51:04.772098   24262 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0425 18:51:04.772159   24262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0425 18:51:04.784264   24262 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0425 18:51:04.784280   24262 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0425 18:51:04.784293   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0425 18:51:04.784313   24262 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0425 18:51:04.784376   24262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0425 18:51:04.790145   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0425 18:51:04.790180   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0425 18:51:36.325627   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0425 18:51:36.325733   24262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0425 18:51:36.331308   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0425 18:51:36.331341   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0425 18:52:08.753573   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:52:08.772949   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0425 18:52:08.773028   24262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0425 18:52:08.779035   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0425 18:52:08.779072   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
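The kubectl, kubeadm and kubelet downloads above use the ?checksum=file:...sha256 form, i.e. each binary is fetched together with its published SHA-256 and verified before being cached and scp'd onto the node. Reproducing that check by hand for kubelet would look roughly like this (URLs copied from the log; the sha256sum step is an assumption about manual verification, not something this run executes):

    curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum -c -   # must report: kubelet: OK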
	I0425 18:52:09.276520   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0425 18:52:09.288087   24262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0425 18:52:09.310275   24262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 18:52:09.329583   24262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0425 18:52:09.348142   24262 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0425 18:52:09.352733   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:52:09.366360   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:52:09.488599   24262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:52:09.507460   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:52:09.507888   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:52:09.507930   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:52:09.523497   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0425 18:52:09.524076   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:52:09.524576   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:52:09.524613   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:52:09.524992   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:52:09.525213   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:52:09.525389   24262 start.go:316] joinCluster: &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cluster
Name:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:52:09.525500   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0425 18:52:09.525523   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:52:09.528382   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:52:09.528804   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:52:09.528845   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:52:09.528980   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:52:09.529134   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:52:09.529277   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:52:09.529398   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:52:09.695957   24262 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:52:09.696007   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hjy66t.mflcauxv23x5gsd7 --discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-912667-m02 --control-plane --apiserver-advertise-address=192.168.39.66 --apiserver-bind-port=8443"
	I0425 18:52:33.205575   24262 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hjy66t.mflcauxv23x5gsd7 --discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-912667-m02 --control-plane --apiserver-advertise-address=192.168.39.66 --apiserver-bind-port=8443": (23.509540296s)
	I0425 18:52:33.205620   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0425 18:52:33.729595   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-912667-m02 minikube.k8s.io/updated_at=2024_04_25T18_52_33_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=ha-912667 minikube.k8s.io/primary=false
	I0425 18:52:33.900493   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-912667-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0425 18:52:34.064880   24262 start.go:318] duration metric: took 24.539487846s to joinCluster
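Condensed, the join sequence logged above is: create a join command on the existing control plane, run it on m02 with --control-plane, then relabel and un-taint the new node. A hedged recap of the same commands (token and discovery hash are the ones from this run and are long since expired; the label line shows only one of the several labels applied):

    # on the existing control plane (ha-912667)
    kubeadm token create --print-join-command --ttl=0

    # on the joining node (ha-912667-m02)
    kubeadm join control-plane.minikube.internal:8443 \
      --token hjy66t.mflcauxv23x5gsd7 \
      --discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
      --control-plane --apiserver-advertise-address=192.168.39.66 --apiserver-bind-port=8443

    # from any node with kubectl access
    kubectl label --overwrite node ha-912667-m02 minikube.k8s.io/primary=false
    kubectl taint node ha-912667-m02 node-role.kubernetes.io/control-plane:NoSchedule-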
	I0425 18:52:34.064959   24262 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:52:34.066637   24262 out.go:177] * Verifying Kubernetes components...
	I0425 18:52:34.065259   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:52:34.068009   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:52:34.342188   24262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:52:34.371769   24262 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:52:34.372092   24262 kapi.go:59] client config for ha-912667: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt", KeyFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key", CAFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0425 18:52:34.372178   24262 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.189:8443
	I0425 18:52:34.372452   24262 node_ready.go:35] waiting up to 6m0s for node "ha-912667-m02" to be "Ready" ...
	I0425 18:52:34.372561   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:34.372572   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:34.372583   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:34.372588   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:34.384927   24262 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0425 18:52:34.873548   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:34.873570   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:34.873578   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:34.873583   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:34.882515   24262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0425 18:52:35.373637   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:35.373659   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:35.373670   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:35.373675   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:35.379726   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:52:35.873343   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:35.873365   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:35.873372   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:35.873376   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:35.877316   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:36.372657   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:36.372680   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:36.372689   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:36.372692   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:36.376310   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:36.377188   24262 node_ready.go:53] node "ha-912667-m02" has status "Ready":"False"
	I0425 18:52:36.872637   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:36.872662   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:36.872670   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:36.872675   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:36.876849   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:37.373610   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:37.373640   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:37.373654   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:37.373659   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:37.377391   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:37.873065   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:37.873087   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:37.873095   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:37.873100   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:37.879292   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:52:38.373547   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:38.373571   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:38.373579   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:38.373583   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:38.378058   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:38.378985   24262 node_ready.go:53] node "ha-912667-m02" has status "Ready":"False"
	I0425 18:52:38.873413   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:38.873436   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:38.873443   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:38.873447   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:38.876991   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:39.373427   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:39.373458   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:39.373469   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:39.373476   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:39.377810   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:39.873420   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:39.873443   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:39.873450   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:39.873455   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:39.877099   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:40.373137   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:40.373220   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:40.373235   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:40.373240   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:40.378474   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:40.379541   24262 node_ready.go:53] node "ha-912667-m02" has status "Ready":"False"
	I0425 18:52:40.873292   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:40.873319   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:40.873330   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:40.873338   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:40.884101   24262 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0425 18:52:41.373410   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:41.373434   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:41.373440   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:41.373444   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:41.379369   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:41.873602   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:41.873629   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:41.873637   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:41.873642   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:41.877191   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:42.373511   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:42.373547   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.373553   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.373559   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.377598   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:42.378463   24262 node_ready.go:49] node "ha-912667-m02" has status "Ready":"True"
	I0425 18:52:42.378481   24262 node_ready.go:38] duration metric: took 8.005998806s for node "ha-912667-m02" to be "Ready" ...
	I0425 18:52:42.378489   24262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
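The GET loops above and below are minikube polling the API directly: first /api/v1/nodes/ha-912667-m02 until its Ready condition turns True (about 8s here), then the system-critical kube-system pods listed in the message just above. The same two waits expressed with plain kubectl would be roughly (sketch only, not commands from this run):

    kubectl wait --for=condition=Ready node/ha-912667-m02 --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod --all --timeout=6m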
	I0425 18:52:42.378545   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:52:42.378555   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.378562   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.378565   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.384147   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:42.391456   24262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.391554   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-22wvx
	I0425 18:52:42.391567   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.391578   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.391587   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.397170   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:42.397948   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:42.397969   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.397978   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.397987   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.401467   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:42.402146   24262 pod_ready.go:92] pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:42.402167   24262 pod_ready.go:81] duration metric: took 10.683846ms for pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.402179   24262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.402262   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-h4s2h
	I0425 18:52:42.402274   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.402284   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.402291   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.405106   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:42.406039   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:42.406053   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.406060   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.406065   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.408563   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:42.409259   24262 pod_ready.go:92] pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:42.409281   24262 pod_ready.go:81] duration metric: took 7.093835ms for pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.409294   24262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.409354   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667
	I0425 18:52:42.409365   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.409374   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.409386   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.412025   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:42.412614   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:42.412627   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.412634   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.412638   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.415013   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:42.415520   24262 pod_ready.go:92] pod "etcd-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:42.415538   24262 pod_ready.go:81] duration metric: took 6.235612ms for pod "etcd-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.415549   24262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:42.415609   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m02
	I0425 18:52:42.415620   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.415629   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.415639   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.418675   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:42.419657   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:42.419670   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.419680   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.419685   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.427005   24262 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0425 18:52:42.915899   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m02
	I0425 18:52:42.915921   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.915928   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.915933   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.919689   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:42.920328   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:42.920350   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:42.920374   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:42.920378   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:42.923472   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.416409   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m02
	I0425 18:52:43.416430   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.416437   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.416442   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.420157   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.421097   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:43.421117   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.421127   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.421132   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.424546   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.425483   24262 pod_ready.go:92] pod "etcd-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:43.425504   24262 pod_ready.go:81] duration metric: took 1.009946144s for pod "etcd-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:43.425524   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:43.425598   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667
	I0425 18:52:43.425609   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.425618   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.425627   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.428956   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.574106   24262 request.go:629] Waited for 143.841662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:43.574168   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:43.574173   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.574180   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.574184   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.578066   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.578761   24262 pod_ready.go:92] pod "kube-apiserver-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:43.578779   24262 pod_ready.go:81] duration metric: took 153.248043ms for pod "kube-apiserver-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:43.578792   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:43.774216   24262 request.go:629] Waited for 195.339462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:43.774267   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:43.774272   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.774279   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.774283   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.778170   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:43.974360   24262 request.go:629] Waited for 195.375592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:43.974425   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:43.974432   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:43.974442   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:43.974447   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:43.978839   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:44.173816   24262 request.go:629] Waited for 94.267791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:44.173896   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:44.173908   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:44.173918   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:44.173926   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:44.178191   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:44.374450   24262 request.go:629] Waited for 195.373961ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:44.374529   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:44.374534   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:44.374541   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:44.374544   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:44.378227   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:44.579975   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:44.580000   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:44.580013   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:44.580018   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:44.584635   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:44.774535   24262 request.go:629] Waited for 188.388488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:44.774638   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:44.774652   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:44.774661   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:44.774674   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:44.778025   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:45.079676   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:45.079699   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:45.079706   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:45.079709   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:45.083902   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:45.174281   24262 request.go:629] Waited for 89.281344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:45.174348   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:45.174354   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:45.174361   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:45.174364   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:45.178008   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:45.579329   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:45.579357   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:45.579365   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:45.579368   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:45.583223   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:45.584345   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:45.584362   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:45.584369   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:45.584375   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:45.587651   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:45.588479   24262 pod_ready.go:102] pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace has status "Ready":"False"
	I0425 18:52:46.079707   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:46.079728   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:46.079735   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:46.079738   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:46.083723   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:46.084499   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:46.084516   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:46.084532   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:46.084540   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:46.087461   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:46.579409   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:46.579437   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:46.579446   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:46.579452   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:46.583019   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:46.583880   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:46.583899   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:46.583906   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:46.583910   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:46.586780   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:47.080000   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:47.080028   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:47.080036   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:47.080040   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:47.085744   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:47.087650   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:47.087671   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:47.087682   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:47.087687   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:47.091461   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:47.579653   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:47.579679   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:47.579690   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:47.579695   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:47.583170   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:47.584026   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:47.584040   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:47.584047   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:47.584051   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:47.587441   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:48.079499   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:52:48.079526   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.079537   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.079545   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.083310   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:48.084241   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:48.084259   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.084269   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.084274   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.087226   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:48.087986   24262 pod_ready.go:92] pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:48.088010   24262 pod_ready.go:81] duration metric: took 4.509210477s for pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:48.088023   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:48.088094   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667
	I0425 18:52:48.088106   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.088114   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.088118   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.090857   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:52:48.091736   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:48.091756   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.091763   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.091767   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.094847   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:48.095386   24262 pod_ready.go:92] pod "kube-controller-manager-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:48.095405   24262 pod_ready.go:81] duration metric: took 7.373536ms for pod "kube-controller-manager-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:48.095414   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:48.173634   24262 request.go:629] Waited for 78.161409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:52:48.173722   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:52:48.173739   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.173748   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.173755   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.177915   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:52:48.374085   24262 request.go:629] Waited for 195.377261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:48.374141   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:48.374145   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.374153   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.374162   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.378110   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:48.596548   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:52:48.596568   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.596576   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.596581   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.602761   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:52:48.774030   24262 request.go:629] Waited for 170.345255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:48.774080   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:48.774086   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:48.774093   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:48.774097   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:48.777879   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.095701   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:52:49.095738   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.095748   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.095753   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.107464   24262 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0425 18:52:49.174567   24262 request.go:629] Waited for 66.224162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:49.174631   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:49.174646   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.174657   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.174664   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.178072   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.178831   24262 pod_ready.go:92] pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:49.178851   24262 pod_ready.go:81] duration metric: took 1.083431205s for pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:49.178861   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mkgv5" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:49.374276   24262 request.go:629] Waited for 195.361619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkgv5
	I0425 18:52:49.374358   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkgv5
	I0425 18:52:49.374366   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.374373   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.374377   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.377845   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.573854   24262 request.go:629] Waited for 195.222888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:49.573906   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:49.573911   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.573919   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.573923   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.577462   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.578290   24262 pod_ready.go:92] pod "kube-proxy-mkgv5" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:49.578310   24262 pod_ready.go:81] duration metric: took 399.443842ms for pod "kube-proxy-mkgv5" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:49.578326   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rkbcp" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:49.774283   24262 request.go:629] Waited for 195.902176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rkbcp
	I0425 18:52:49.774337   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rkbcp
	I0425 18:52:49.774342   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.774352   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.774402   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.778081   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.974476   24262 request.go:629] Waited for 195.376224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:49.974539   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:49.974544   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:49.974556   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:49.974561   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:49.977955   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:49.978796   24262 pod_ready.go:92] pod "kube-proxy-rkbcp" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:49.978819   24262 pod_ready.go:81] duration metric: took 400.485794ms for pod "kube-proxy-rkbcp" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:49.978832   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:50.173957   24262 request.go:629] Waited for 195.06393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667
	I0425 18:52:50.174048   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667
	I0425 18:52:50.174062   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.174072   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.174077   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.177434   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:50.373598   24262 request.go:629] Waited for 195.337793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:50.373671   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:52:50.373676   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.373682   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.373685   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.377154   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:50.378023   24262 pod_ready.go:92] pod "kube-scheduler-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:50.378045   24262 pod_ready.go:81] duration metric: took 399.203687ms for pod "kube-scheduler-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:50.378059   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:50.574460   24262 request.go:629] Waited for 196.320169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m02
	I0425 18:52:50.574518   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m02
	I0425 18:52:50.574526   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.574535   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.574542   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.580026   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:52:50.774247   24262 request.go:629] Waited for 193.363837ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:50.774299   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:52:50.774305   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.774312   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.774315   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.778005   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:50.778960   24262 pod_ready.go:92] pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:52:50.778985   24262 pod_ready.go:81] duration metric: took 400.916758ms for pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:52:50.778999   24262 pod_ready.go:38] duration metric: took 8.400497325s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
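The readiness loop logged above repeatedly GETs each control-plane pod (and its node) until the pod reports the condition Ready=True. A minimal sketch of that check using client-go; the kubeconfig path, poll interval, and timeout are illustrative assumptions, not minikube's actual wiring:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod has condition Ready=True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path and pod name; adjust for your cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-ha-912667-m02", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second polls
	}
	fmt.Println("timed out waiting for pod to become Ready")
}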
	I0425 18:52:50.779017   24262 api_server.go:52] waiting for apiserver process to appear ...
	I0425 18:52:50.779077   24262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:52:50.794983   24262 api_server.go:72] duration metric: took 16.729987351s to wait for apiserver process to appear ...
	I0425 18:52:50.795010   24262 api_server.go:88] waiting for apiserver healthz status ...
	I0425 18:52:50.795032   24262 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I0425 18:52:50.799683   24262 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I0425 18:52:50.799759   24262 round_trippers.go:463] GET https://192.168.39.189:8443/version
	I0425 18:52:50.799769   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.799776   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.799780   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.800649   24262 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0425 18:52:50.800727   24262 api_server.go:141] control plane version: v1.30.0
	I0425 18:52:50.800743   24262 api_server.go:131] duration metric: took 5.726686ms to wait for apiserver health ...
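The healthz step above is a plain HTTPS GET against /healthz that passes when the apiserver answers 200 with body "ok". A small stand-alone sketch; the endpoint is taken from the log, and skipping TLS verification is an illustrative shortcut (minikube trusts the cluster's own CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustrative client; a real check would verify the cluster CA rather than skip it.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.189:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}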
	I0425 18:52:50.800749   24262 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 18:52:50.974161   24262 request.go:629] Waited for 173.32943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:52:50.974234   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:52:50.974242   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:50.974252   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:50.974261   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:50.980896   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:52:50.987446   24262 system_pods.go:59] 17 kube-system pods found
	I0425 18:52:50.987476   24262 system_pods.go:61] "coredns-7db6d8ff4d-22wvx" [56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e] Running
	I0425 18:52:50.987482   24262 system_pods.go:61] "coredns-7db6d8ff4d-h4s2h" [f9e2233c-5350-47ab-bdae-6fa35972b601] Running
	I0425 18:52:50.987486   24262 system_pods.go:61] "etcd-ha-912667" [d18fe5ec-655e-4da4-b8de-782eef846d55] Running
	I0425 18:52:50.987489   24262 system_pods.go:61] "etcd-ha-912667-m02" [8d6782f6-b00b-4d10-8a3a-452460974164] Running
	I0425 18:52:50.987492   24262 system_pods.go:61] "kindnet-sq4lb" [049d5dc9-13ec-4135-8785-229071e57d1a] Running
	I0425 18:52:50.987495   24262 system_pods.go:61] "kindnet-xlvjt" [191ff28e-07d7-459e-afe5-e3d8c23e1016] Running
	I0425 18:52:50.987498   24262 system_pods.go:61] "kube-apiserver-ha-912667" [a8339e9c-d67f-4e84-ba79-754ad86fdf82] Running
	I0425 18:52:50.987501   24262 system_pods.go:61] "kube-apiserver-ha-912667-m02" [a420b2a1-207a-435f-98d2-893836a60e78] Running
	I0425 18:52:50.987508   24262 system_pods.go:61] "kube-controller-manager-ha-912667" [6a91aebd-e142-4165-8acb-cc4c49a5df54] Running
	I0425 18:52:50.987511   24262 system_pods.go:61] "kube-controller-manager-ha-912667-m02" [e94e1a60-af79-4a8e-ac11-e7d36c3d68a3] Running
	I0425 18:52:50.987514   24262 system_pods.go:61] "kube-proxy-mkgv5" [7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a] Running
	I0425 18:52:50.987517   24262 system_pods.go:61] "kube-proxy-rkbcp" [c62d3486-15d6-4398-a397-2f542d8fb074] Running
	I0425 18:52:50.987523   24262 system_pods.go:61] "kube-scheduler-ha-912667" [7dc33762-4bee-467e-9db4-d783ffe04992] Running
	I0425 18:52:50.987526   24262 system_pods.go:61] "kube-scheduler-ha-912667-m02" [d2ab7cf9-3cd9-4b0b-aec1-26aee5cf3b2a] Running
	I0425 18:52:50.987528   24262 system_pods.go:61] "kube-vip-ha-912667" [bd3267a7-206d-4e47-b154-a7f17a492684] Running
	I0425 18:52:50.987532   24262 system_pods.go:61] "kube-vip-ha-912667-m02" [c0622f7e-0264-4168-b510-7563083cc9d3] Running
	I0425 18:52:50.987536   24262 system_pods.go:61] "storage-provisioner" [f3a0b111-609d-49b3-a056-71eb4b641224] Running
	I0425 18:52:50.987541   24262 system_pods.go:74] duration metric: took 186.787283ms to wait for pod list to return data ...
	I0425 18:52:50.987552   24262 default_sa.go:34] waiting for default service account to be created ...
	I0425 18:52:51.173970   24262 request.go:629] Waited for 186.329986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/default/serviceaccounts
	I0425 18:52:51.174022   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/default/serviceaccounts
	I0425 18:52:51.174027   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:51.174034   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:51.174038   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:51.178033   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:52:51.178307   24262 default_sa.go:45] found service account: "default"
	I0425 18:52:51.178328   24262 default_sa.go:55] duration metric: took 190.770193ms for default service account to be created ...
	I0425 18:52:51.178340   24262 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 18:52:51.373697   24262 request.go:629] Waited for 195.296743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:52:51.373783   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:52:51.373791   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:51.373798   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:51.373809   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:51.381703   24262 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0425 18:52:51.386543   24262 system_pods.go:86] 17 kube-system pods found
	I0425 18:52:51.386578   24262 system_pods.go:89] "coredns-7db6d8ff4d-22wvx" [56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e] Running
	I0425 18:52:51.386585   24262 system_pods.go:89] "coredns-7db6d8ff4d-h4s2h" [f9e2233c-5350-47ab-bdae-6fa35972b601] Running
	I0425 18:52:51.386591   24262 system_pods.go:89] "etcd-ha-912667" [d18fe5ec-655e-4da4-b8de-782eef846d55] Running
	I0425 18:52:51.386597   24262 system_pods.go:89] "etcd-ha-912667-m02" [8d6782f6-b00b-4d10-8a3a-452460974164] Running
	I0425 18:52:51.386602   24262 system_pods.go:89] "kindnet-sq4lb" [049d5dc9-13ec-4135-8785-229071e57d1a] Running
	I0425 18:52:51.386609   24262 system_pods.go:89] "kindnet-xlvjt" [191ff28e-07d7-459e-afe5-e3d8c23e1016] Running
	I0425 18:52:51.386617   24262 system_pods.go:89] "kube-apiserver-ha-912667" [a8339e9c-d67f-4e84-ba79-754ad86fdf82] Running
	I0425 18:52:51.386624   24262 system_pods.go:89] "kube-apiserver-ha-912667-m02" [a420b2a1-207a-435f-98d2-893836a60e78] Running
	I0425 18:52:51.386634   24262 system_pods.go:89] "kube-controller-manager-ha-912667" [6a91aebd-e142-4165-8acb-cc4c49a5df54] Running
	I0425 18:52:51.386641   24262 system_pods.go:89] "kube-controller-manager-ha-912667-m02" [e94e1a60-af79-4a8e-ac11-e7d36c3d68a3] Running
	I0425 18:52:51.386651   24262 system_pods.go:89] "kube-proxy-mkgv5" [7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a] Running
	I0425 18:52:51.386658   24262 system_pods.go:89] "kube-proxy-rkbcp" [c62d3486-15d6-4398-a397-2f542d8fb074] Running
	I0425 18:52:51.386665   24262 system_pods.go:89] "kube-scheduler-ha-912667" [7dc33762-4bee-467e-9db4-d783ffe04992] Running
	I0425 18:52:51.386674   24262 system_pods.go:89] "kube-scheduler-ha-912667-m02" [d2ab7cf9-3cd9-4b0b-aec1-26aee5cf3b2a] Running
	I0425 18:52:51.386681   24262 system_pods.go:89] "kube-vip-ha-912667" [bd3267a7-206d-4e47-b154-a7f17a492684] Running
	I0425 18:52:51.386688   24262 system_pods.go:89] "kube-vip-ha-912667-m02" [c0622f7e-0264-4168-b510-7563083cc9d3] Running
	I0425 18:52:51.386700   24262 system_pods.go:89] "storage-provisioner" [f3a0b111-609d-49b3-a056-71eb4b641224] Running
	I0425 18:52:51.386712   24262 system_pods.go:126] duration metric: took 208.365447ms to wait for k8s-apps to be running ...
	I0425 18:52:51.386724   24262 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 18:52:51.386781   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:52:51.406830   24262 system_svc.go:56] duration metric: took 20.100576ms WaitForService to wait for kubelet
	I0425 18:52:51.406861   24262 kubeadm.go:576] duration metric: took 17.341866618s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 18:52:51.406885   24262 node_conditions.go:102] verifying NodePressure condition ...
	I0425 18:52:51.574275   24262 request.go:629] Waited for 167.322984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes
	I0425 18:52:51.574416   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes
	I0425 18:52:51.574427   24262 round_trippers.go:469] Request Headers:
	I0425 18:52:51.574434   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:52:51.574438   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:52:51.580572   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:52:51.581822   24262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:52:51.581846   24262 node_conditions.go:123] node cpu capacity is 2
	I0425 18:52:51.581856   24262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:52:51.581860   24262 node_conditions.go:123] node cpu capacity is 2
	I0425 18:52:51.581863   24262 node_conditions.go:105] duration metric: took 174.973657ms to run NodePressure ...
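The NodePressure step lists every node and reads the capacity it reports (ephemeral storage, CPU), which is where the "17734596Ki" and "cpu capacity is 2" lines above come from. A sketch of the equivalent list call with client-go; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList; Cpu() and StorageEphemeral() return quantities.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
	}
}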
	I0425 18:52:51.581873   24262 start.go:240] waiting for startup goroutines ...
	I0425 18:52:51.581917   24262 start.go:254] writing updated cluster config ...
	I0425 18:52:51.583726   24262 out.go:177] 
	I0425 18:52:51.585237   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:52:51.585377   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:52:51.587259   24262 out.go:177] * Starting "ha-912667-m03" control-plane node in "ha-912667" cluster
	I0425 18:52:51.588669   24262 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:52:51.588692   24262 cache.go:56] Caching tarball of preloaded images
	I0425 18:52:51.588771   24262 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 18:52:51.588782   24262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 18:52:51.588864   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:52:51.589031   24262 start.go:360] acquireMachinesLock for ha-912667-m03: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 18:52:51.589070   24262 start.go:364] duration metric: took 20.106µs to acquireMachinesLock for "ha-912667-m03"
	I0425 18:52:51.589086   24262 start.go:93] Provisioning new machine with config: &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:52:51.589179   24262 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0425 18:52:51.590680   24262 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0425 18:52:51.590748   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:52:51.590770   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:52:51.606521   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I0425 18:52:51.606916   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:52:51.607406   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:52:51.607425   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:52:51.607725   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:52:51.607913   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetMachineName
	I0425 18:52:51.608081   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:52:51.608263   24262 start.go:159] libmachine.API.Create for "ha-912667" (driver="kvm2")
	I0425 18:52:51.608288   24262 client.go:168] LocalClient.Create starting
	I0425 18:52:51.608316   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 18:52:51.608344   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:52:51.608358   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:52:51.608405   24262 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 18:52:51.608423   24262 main.go:141] libmachine: Decoding PEM data...
	I0425 18:52:51.608434   24262 main.go:141] libmachine: Parsing certificate...
	I0425 18:52:51.608449   24262 main.go:141] libmachine: Running pre-create checks...
	I0425 18:52:51.608456   24262 main.go:141] libmachine: (ha-912667-m03) Calling .PreCreateCheck
	I0425 18:52:51.608618   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetConfigRaw
	I0425 18:52:51.608993   24262 main.go:141] libmachine: Creating machine...
	I0425 18:52:51.609007   24262 main.go:141] libmachine: (ha-912667-m03) Calling .Create
	I0425 18:52:51.609147   24262 main.go:141] libmachine: (ha-912667-m03) Creating KVM machine...
	I0425 18:52:51.610519   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found existing default KVM network
	I0425 18:52:51.610624   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found existing private KVM network mk-ha-912667
	I0425 18:52:51.610779   24262 main.go:141] libmachine: (ha-912667-m03) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03 ...
	I0425 18:52:51.610808   24262 main.go:141] libmachine: (ha-912667-m03) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 18:52:51.610878   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:51.610771   25320 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:52:51.610966   24262 main.go:141] libmachine: (ha-912667-m03) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 18:52:51.822118   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:51.821973   25320 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa...
	I0425 18:52:51.896531   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:51.896417   25320 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/ha-912667-m03.rawdisk...
	I0425 18:52:51.896568   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Writing magic tar header
	I0425 18:52:51.896578   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Writing SSH key tar header
	I0425 18:52:51.896586   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:51.896528   25320 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03 ...
	I0425 18:52:51.896648   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03
	I0425 18:52:51.896667   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 18:52:51.896685   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03 (perms=drwx------)
	I0425 18:52:51.896731   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:52:51.896760   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 18:52:51.896777   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 18:52:51.896795   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 18:52:51.896809   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 18:52:51.896824   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 18:52:51.896838   24262 main.go:141] libmachine: (ha-912667-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 18:52:51.896852   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 18:52:51.896864   24262 main.go:141] libmachine: (ha-912667-m03) Creating domain...
	I0425 18:52:51.896884   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home/jenkins
	I0425 18:52:51.896896   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Checking permissions on dir: /home
	I0425 18:52:51.896911   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Skipping /home - not owner
	I0425 18:52:51.897780   24262 main.go:141] libmachine: (ha-912667-m03) define libvirt domain using xml: 
	I0425 18:52:51.897797   24262 main.go:141] libmachine: (ha-912667-m03) <domain type='kvm'>
	I0425 18:52:51.897824   24262 main.go:141] libmachine: (ha-912667-m03)   <name>ha-912667-m03</name>
	I0425 18:52:51.897835   24262 main.go:141] libmachine: (ha-912667-m03)   <memory unit='MiB'>2200</memory>
	I0425 18:52:51.897845   24262 main.go:141] libmachine: (ha-912667-m03)   <vcpu>2</vcpu>
	I0425 18:52:51.897859   24262 main.go:141] libmachine: (ha-912667-m03)   <features>
	I0425 18:52:51.897869   24262 main.go:141] libmachine: (ha-912667-m03)     <acpi/>
	I0425 18:52:51.897881   24262 main.go:141] libmachine: (ha-912667-m03)     <apic/>
	I0425 18:52:51.897892   24262 main.go:141] libmachine: (ha-912667-m03)     <pae/>
	I0425 18:52:51.897902   24262 main.go:141] libmachine: (ha-912667-m03)     
	I0425 18:52:51.897930   24262 main.go:141] libmachine: (ha-912667-m03)   </features>
	I0425 18:52:51.897955   24262 main.go:141] libmachine: (ha-912667-m03)   <cpu mode='host-passthrough'>
	I0425 18:52:51.897964   24262 main.go:141] libmachine: (ha-912667-m03)   
	I0425 18:52:51.897974   24262 main.go:141] libmachine: (ha-912667-m03)   </cpu>
	I0425 18:52:51.897983   24262 main.go:141] libmachine: (ha-912667-m03)   <os>
	I0425 18:52:51.897994   24262 main.go:141] libmachine: (ha-912667-m03)     <type>hvm</type>
	I0425 18:52:51.898004   24262 main.go:141] libmachine: (ha-912667-m03)     <boot dev='cdrom'/>
	I0425 18:52:51.898012   24262 main.go:141] libmachine: (ha-912667-m03)     <boot dev='hd'/>
	I0425 18:52:51.898033   24262 main.go:141] libmachine: (ha-912667-m03)     <bootmenu enable='no'/>
	I0425 18:52:51.898051   24262 main.go:141] libmachine: (ha-912667-m03)   </os>
	I0425 18:52:51.898060   24262 main.go:141] libmachine: (ha-912667-m03)   <devices>
	I0425 18:52:51.898070   24262 main.go:141] libmachine: (ha-912667-m03)     <disk type='file' device='cdrom'>
	I0425 18:52:51.898091   24262 main.go:141] libmachine: (ha-912667-m03)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/boot2docker.iso'/>
	I0425 18:52:51.898110   24262 main.go:141] libmachine: (ha-912667-m03)       <target dev='hdc' bus='scsi'/>
	I0425 18:52:51.898123   24262 main.go:141] libmachine: (ha-912667-m03)       <readonly/>
	I0425 18:52:51.898133   24262 main.go:141] libmachine: (ha-912667-m03)     </disk>
	I0425 18:52:51.898144   24262 main.go:141] libmachine: (ha-912667-m03)     <disk type='file' device='disk'>
	I0425 18:52:51.898158   24262 main.go:141] libmachine: (ha-912667-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 18:52:51.898175   24262 main.go:141] libmachine: (ha-912667-m03)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/ha-912667-m03.rawdisk'/>
	I0425 18:52:51.898191   24262 main.go:141] libmachine: (ha-912667-m03)       <target dev='hda' bus='virtio'/>
	I0425 18:52:51.898218   24262 main.go:141] libmachine: (ha-912667-m03)     </disk>
	I0425 18:52:51.898235   24262 main.go:141] libmachine: (ha-912667-m03)     <interface type='network'>
	I0425 18:52:51.898248   24262 main.go:141] libmachine: (ha-912667-m03)       <source network='mk-ha-912667'/>
	I0425 18:52:51.898257   24262 main.go:141] libmachine: (ha-912667-m03)       <model type='virtio'/>
	I0425 18:52:51.898268   24262 main.go:141] libmachine: (ha-912667-m03)     </interface>
	I0425 18:52:51.898280   24262 main.go:141] libmachine: (ha-912667-m03)     <interface type='network'>
	I0425 18:52:51.898293   24262 main.go:141] libmachine: (ha-912667-m03)       <source network='default'/>
	I0425 18:52:51.898309   24262 main.go:141] libmachine: (ha-912667-m03)       <model type='virtio'/>
	I0425 18:52:51.898322   24262 main.go:141] libmachine: (ha-912667-m03)     </interface>
	I0425 18:52:51.898335   24262 main.go:141] libmachine: (ha-912667-m03)     <serial type='pty'>
	I0425 18:52:51.898345   24262 main.go:141] libmachine: (ha-912667-m03)       <target port='0'/>
	I0425 18:52:51.898355   24262 main.go:141] libmachine: (ha-912667-m03)     </serial>
	I0425 18:52:51.898366   24262 main.go:141] libmachine: (ha-912667-m03)     <console type='pty'>
	I0425 18:52:51.898379   24262 main.go:141] libmachine: (ha-912667-m03)       <target type='serial' port='0'/>
	I0425 18:52:51.898395   24262 main.go:141] libmachine: (ha-912667-m03)     </console>
	I0425 18:52:51.898408   24262 main.go:141] libmachine: (ha-912667-m03)     <rng model='virtio'>
	I0425 18:52:51.898420   24262 main.go:141] libmachine: (ha-912667-m03)       <backend model='random'>/dev/random</backend>
	I0425 18:52:51.898441   24262 main.go:141] libmachine: (ha-912667-m03)     </rng>
	I0425 18:52:51.898451   24262 main.go:141] libmachine: (ha-912667-m03)     
	I0425 18:52:51.898470   24262 main.go:141] libmachine: (ha-912667-m03)     
	I0425 18:52:51.898485   24262 main.go:141] libmachine: (ha-912667-m03)   </devices>
	I0425 18:52:51.898505   24262 main.go:141] libmachine: (ha-912667-m03) </domain>
	I0425 18:52:51.898513   24262 main.go:141] libmachine: (ha-912667-m03) 
	I0425 18:52:51.905868   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:3b:cf:2f in network default
	I0425 18:52:51.906430   24262 main.go:141] libmachine: (ha-912667-m03) Ensuring networks are active...
	I0425 18:52:51.906453   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:51.907148   24262 main.go:141] libmachine: (ha-912667-m03) Ensuring network default is active
	I0425 18:52:51.907470   24262 main.go:141] libmachine: (ha-912667-m03) Ensuring network mk-ha-912667 is active
	I0425 18:52:51.907860   24262 main.go:141] libmachine: (ha-912667-m03) Getting domain xml...
	I0425 18:52:51.908577   24262 main.go:141] libmachine: (ha-912667-m03) Creating domain...
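After composing the domain XML above, the kvm2 driver defines the VM in libvirt and starts it. A rough, heavily trimmed sketch using the libvirt Go bindings; the import path and the minimal XML are assumptions for illustration only (the real driver defines the full domain shown above, including disks and both network interfaces):

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Trimmed domain XML, for illustration only.
	xml := `<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
</domain>`

	dom, err := conn.DomainDefineXML(xml) // persistently define the domain
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain
		panic(err)
	}
	fmt.Println("domain defined and started")
}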
	I0425 18:52:53.145546   24262 main.go:141] libmachine: (ha-912667-m03) Waiting to get IP...
	I0425 18:52:53.146295   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:53.146782   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:53.146852   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:53.146756   25320 retry.go:31] will retry after 297.992589ms: waiting for machine to come up
	I0425 18:52:53.446254   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:53.446741   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:53.446772   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:53.446683   25320 retry.go:31] will retry after 302.55332ms: waiting for machine to come up
	I0425 18:52:53.751324   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:53.751803   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:53.751867   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:53.751787   25320 retry.go:31] will retry after 388.619505ms: waiting for machine to come up
	I0425 18:52:54.142472   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:54.142904   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:54.142935   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:54.142855   25320 retry.go:31] will retry after 528.59084ms: waiting for machine to come up
	I0425 18:52:54.672507   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:54.672913   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:54.672941   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:54.672856   25320 retry.go:31] will retry after 623.458204ms: waiting for machine to come up
	I0425 18:52:55.297404   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:55.297882   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:55.297910   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:55.297833   25320 retry.go:31] will retry after 648.625535ms: waiting for machine to come up
	I0425 18:52:55.947623   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:55.947996   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:55.948044   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:55.947970   25320 retry.go:31] will retry after 822.516643ms: waiting for machine to come up
	I0425 18:52:56.772413   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:56.773032   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:56.773057   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:56.772987   25320 retry.go:31] will retry after 1.336973204s: waiting for machine to come up
	I0425 18:52:58.111359   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:58.111843   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:58.111870   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:58.111771   25320 retry.go:31] will retry after 1.545344182s: waiting for machine to come up
	I0425 18:52:59.659246   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:52:59.659703   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:52:59.659728   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:52:59.659658   25320 retry.go:31] will retry after 1.880100949s: waiting for machine to come up
	I0425 18:53:01.541261   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:01.541770   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:53:01.541808   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:53:01.541669   25320 retry.go:31] will retry after 1.940972079s: waiting for machine to come up
	I0425 18:53:03.484587   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:03.485121   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:53:03.485151   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:53:03.485093   25320 retry.go:31] will retry after 2.734995729s: waiting for machine to come up
	I0425 18:53:06.222893   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:06.223400   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:53:06.223433   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:53:06.223350   25320 retry.go:31] will retry after 4.10929529s: waiting for machine to come up
	I0425 18:53:10.335229   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:10.335604   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find current IP address of domain ha-912667-m03 in network mk-ha-912667
	I0425 18:53:10.335632   24262 main.go:141] libmachine: (ha-912667-m03) DBG | I0425 18:53:10.335551   25320 retry.go:31] will retry after 4.681170749s: waiting for machine to come up
	I0425 18:53:15.019237   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.019716   24262 main.go:141] libmachine: (ha-912667-m03) Found IP for machine: 192.168.39.179
	I0425 18:53:15.019739   24262 main.go:141] libmachine: (ha-912667-m03) Reserving static IP address...
	I0425 18:53:15.019750   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has current primary IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.020085   24262 main.go:141] libmachine: (ha-912667-m03) DBG | unable to find host DHCP lease matching {name: "ha-912667-m03", mac: "52:54:00:fb:3e:7a", ip: "192.168.39.179"} in network mk-ha-912667
	I0425 18:53:15.092151   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Getting to WaitForSSH function...
	I0425 18:53:15.092176   24262 main.go:141] libmachine: (ha-912667-m03) Reserved static IP address: 192.168.39.179
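Between 18:52:53 and 18:53:15 the driver polled the DHCP leases for the new MAC with a growing, jittered delay until an address appeared. A minimal stand-alone retry helper in the same spirit; the lookup function is a placeholder (minikube reads the lease table through libvirt), and the delays only roughly mirror the intervals in the log:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn with a growing, jittered delay until it
// succeeds or the timeout elapses.
func retryWithBackoff(timeout time.Duration, fn func() (string, error)) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := fn(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the interval between attempts
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	// lookupIP stands in for reading the DHCP lease table.
	lookupIP := func() (string, error) { return "", errors.New("no lease yet") }
	if ip, err := retryWithBackoff(30*time.Second, lookupIP); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}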
	I0425 18:53:15.092225   24262 main.go:141] libmachine: (ha-912667-m03) Waiting for SSH to be available...
	I0425 18:53:15.095142   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.095685   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.095720   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.095980   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Using SSH client type: external
	I0425 18:53:15.096018   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa (-rw-------)
	I0425 18:53:15.096054   24262 main.go:141] libmachine: (ha-912667-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 18:53:15.096069   24262 main.go:141] libmachine: (ha-912667-m03) DBG | About to run SSH command:
	I0425 18:53:15.096084   24262 main.go:141] libmachine: (ha-912667-m03) DBG | exit 0
	I0425 18:53:15.226589   24262 main.go:141] libmachine: (ha-912667-m03) DBG | SSH cmd err, output: <nil>: 
	I0425 18:53:15.226836   24262 main.go:141] libmachine: (ha-912667-m03) KVM machine creation complete!
	I0425 18:53:15.227213   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetConfigRaw
	I0425 18:53:15.227696   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:15.227896   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:15.228064   24262 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 18:53:15.228078   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetState
	I0425 18:53:15.229352   24262 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 18:53:15.229368   24262 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 18:53:15.229375   24262 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 18:53:15.229381   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.232456   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.232927   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.232954   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.233279   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:15.233445   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.233615   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.233819   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:15.233997   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:15.234277   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:15.234295   24262 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 18:53:15.346168   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
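WaitForSSH, as logged above, just keeps opening an SSH session with the generated key and running "exit 0" until that succeeds. A sketch with golang.org/x/crypto/ssh; the address, user, and key path are reused from the log purely for illustration:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH returns nil once "exit 0" runs successfully over SSH.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				runErr := session.Run("exit 0")
				session.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	err := waitForSSH("192.168.39.179:22", "docker",
		"/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa",
		2*time.Minute)
	fmt.Println("waitForSSH:", err)
}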
	I0425 18:53:15.346194   24262 main.go:141] libmachine: Detecting the provisioner...
	I0425 18:53:15.346215   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.348956   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.349351   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.349380   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.349544   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:15.349726   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.349884   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.349995   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:15.350132   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:15.350358   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:15.350370   24262 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 18:53:15.463810   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 18:53:15.463889   24262 main.go:141] libmachine: found compatible host: buildroot
	I0425 18:53:15.463903   24262 main.go:141] libmachine: Provisioning with buildroot...
	I0425 18:53:15.463913   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetMachineName
	I0425 18:53:15.464193   24262 buildroot.go:166] provisioning hostname "ha-912667-m03"
	I0425 18:53:15.464223   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetMachineName
	I0425 18:53:15.464412   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.466951   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.467302   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.467328   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.467545   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:15.467700   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.467854   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.468013   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:15.468331   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:15.468515   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:15.468536   24262 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-912667-m03 && echo "ha-912667-m03" | sudo tee /etc/hostname
	I0425 18:53:15.599786   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-912667-m03
	
	I0425 18:53:15.599819   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.602507   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.602891   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.602921   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.603114   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:15.603337   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.603497   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.603671   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:15.603868   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:15.604024   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:15.604040   24262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-912667-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-912667-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-912667-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 18:53:15.730061   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
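The hostname step above is idempotent: the shell snippet only touches /etc/hosts when no entry for the new name exists, rewriting the 127.0.1.1 line if present and appending one otherwise. A minimal Go sketch that assembles the same snippet for an arbitrary hostname (the helper name is illustrative, not minikube's actual code):

	package main

	import "fmt"

	// hostsFixupCmd returns a shell snippet that maps 127.0.1.1 to the given
	// hostname in /etc/hosts, mirroring the command run over SSH above.
	func hostsFixupCmd(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	}

	func main() {
		fmt.Println(hostsFixupCmd("ha-912667-m03"))
	}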
	I0425 18:53:15.730093   24262 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 18:53:15.730116   24262 buildroot.go:174] setting up certificates
	I0425 18:53:15.730126   24262 provision.go:84] configureAuth start
	I0425 18:53:15.730134   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetMachineName
	I0425 18:53:15.730420   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:53:15.733016   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.733412   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.733442   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.733549   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.735702   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.736039   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.736066   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.736212   24262 provision.go:143] copyHostCerts
	I0425 18:53:15.736246   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:53:15.736285   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 18:53:15.736295   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 18:53:15.736390   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 18:53:15.736495   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:53:15.736522   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 18:53:15.736532   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 18:53:15.736571   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 18:53:15.736639   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:53:15.736665   24262 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 18:53:15.736674   24262 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 18:53:15.736704   24262 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 18:53:15.736785   24262 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.ha-912667-m03 san=[127.0.0.1 192.168.39.179 ha-912667-m03 localhost minikube]
	I0425 18:53:15.922828   24262 provision.go:177] copyRemoteCerts
	I0425 18:53:15.922899   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 18:53:15.922930   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:15.925985   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.926326   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:15.926354   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:15.926562   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:15.926761   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:15.926909   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:15.927047   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:53:16.015167   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 18:53:16.015242   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 18:53:16.044577   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 18:53:16.044645   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0425 18:53:16.070841   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 18:53:16.070920   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 18:53:16.097754   24262 provision.go:87] duration metric: took 367.611542ms to configureAuth
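configureAuth above signs a per-node server certificate against the shared minikube CA, listing the node's IP and hostnames as SANs, then scp's ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A self-contained sketch of CA-signed server-cert generation with crypto/x509; the file names, the RSA/PKCS#1 key format and the one-year validity are assumptions for illustration, not minikube's exact parameters:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Load the CA certificate and key (PEM); paths are illustrative.
		caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
		keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
		caCert := must(x509.ParseCertificate(caBlock.Bytes))
		caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes)) // assumes an RSA PKCS#1 CA key

		serverKey := must(rsa.GenerateKey(rand.Reader, 2048)) // fresh key pair for the node

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-912667-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the provisioning log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.179")},
			DNSNames:    []string{"ha-912667-m03", "localhost", "minikube"},
		}
		der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}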
	I0425 18:53:16.097784   24262 buildroot.go:189] setting minikube options for container-runtime
	I0425 18:53:16.098040   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:53:16.098130   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:16.100878   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.101337   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.101367   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.101525   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:16.101731   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.101938   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.102080   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:16.102283   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:16.102481   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:16.102504   24262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 18:53:16.402337   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 18:53:16.402367   24262 main.go:141] libmachine: Checking connection to Docker...
	I0425 18:53:16.402377   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetURL
	I0425 18:53:16.403540   24262 main.go:141] libmachine: (ha-912667-m03) DBG | Using libvirt version 6000000
	I0425 18:53:16.405969   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.406391   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.406425   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.406597   24262 main.go:141] libmachine: Docker is up and running!
	I0425 18:53:16.406613   24262 main.go:141] libmachine: Reticulating splines...
	I0425 18:53:16.406621   24262 client.go:171] duration metric: took 24.798324995s to LocalClient.Create
	I0425 18:53:16.406648   24262 start.go:167] duration metric: took 24.798385221s to libmachine.API.Create "ha-912667"
	I0425 18:53:16.406659   24262 start.go:293] postStartSetup for "ha-912667-m03" (driver="kvm2")
	I0425 18:53:16.406671   24262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 18:53:16.406693   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:16.406934   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 18:53:16.406962   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:16.409598   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.410161   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.410193   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.410382   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:16.410568   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.410744   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:16.410892   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:53:16.503631   24262 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 18:53:16.508930   24262 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 18:53:16.508950   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 18:53:16.509032   24262 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 18:53:16.509115   24262 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 18:53:16.509124   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 18:53:16.509215   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 18:53:16.520806   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:53:16.549619   24262 start.go:296] duration metric: took 142.947257ms for postStartSetup
	I0425 18:53:16.549668   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetConfigRaw
	I0425 18:53:16.550310   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:53:16.552882   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.553328   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.553356   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.553596   24262 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:53:16.553787   24262 start.go:128] duration metric: took 24.964599205s to createHost
	I0425 18:53:16.553811   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:16.556093   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.556461   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.556490   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.556589   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:16.556775   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.556963   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.557130   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:16.557327   24262 main.go:141] libmachine: Using SSH client type: native
	I0425 18:53:16.557538   24262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I0425 18:53:16.557556   24262 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 18:53:16.672263   24262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714071196.642856982
	
	I0425 18:53:16.672288   24262 fix.go:216] guest clock: 1714071196.642856982
	I0425 18:53:16.672298   24262 fix.go:229] Guest: 2024-04-25 18:53:16.642856982 +0000 UTC Remote: 2024-04-25 18:53:16.553800383 +0000 UTC m=+221.129214256 (delta=89.056599ms)
	I0425 18:53:16.672333   24262 fix.go:200] guest clock delta is within tolerance: 89.056599ms
	I0425 18:53:16.672338   24262 start.go:83] releasing machines lock for "ha-912667-m03", held for 25.083259716s
	I0425 18:53:16.672356   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:16.672655   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:53:16.675500   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.676078   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.676140   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.678596   24262 out.go:177] * Found network options:
	I0425 18:53:16.679994   24262 out.go:177]   - NO_PROXY=192.168.39.189,192.168.39.66
	W0425 18:53:16.681519   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	W0425 18:53:16.681544   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 18:53:16.681558   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:16.682180   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:16.682411   24262 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:53:16.682520   24262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 18:53:16.682558   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	W0425 18:53:16.682649   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	W0425 18:53:16.682682   24262 proxy.go:119] fail to check proxy env: Error ip not in block
	I0425 18:53:16.682779   24262 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 18:53:16.682803   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:53:16.685470   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.685546   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.685935   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.685961   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.685990   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:16.686004   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:16.686233   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:16.686312   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:53:16.686458   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.686477   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:53:16.686623   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:16.686669   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:53:16.686746   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:53:16.686847   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:53:16.934872   24262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 18:53:16.941872   24262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 18:53:16.941929   24262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 18:53:16.962537   24262 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 18:53:16.962558   24262 start.go:494] detecting cgroup driver to use...
	I0425 18:53:16.962615   24262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 18:53:16.980186   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 18:53:16.997938   24262 docker.go:217] disabling cri-docker service (if available) ...
	I0425 18:53:16.997995   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 18:53:17.013248   24262 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 18:53:17.029156   24262 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 18:53:17.148560   24262 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 18:53:17.303796   24262 docker.go:233] disabling docker service ...
	I0425 18:53:17.303879   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 18:53:17.321798   24262 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 18:53:17.336439   24262 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 18:53:17.488152   24262 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 18:53:17.626994   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 18:53:17.642591   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 18:53:17.662872   24262 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 18:53:17.662948   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.674109   24262 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 18:53:17.674160   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.685617   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.698662   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.710613   24262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 18:53:17.722305   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.733467   24262 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.752260   24262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 18:53:17.764484   24262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 18:53:17.776224   24262 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 18:53:17.776297   24262 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 18:53:17.791800   24262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 18:53:17.803882   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:53:17.936848   24262 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 18:53:18.107505   24262 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 18:53:18.107580   24262 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 18:53:18.113331   24262 start.go:562] Will wait 60s for crictl version
	I0425 18:53:18.113379   24262 ssh_runner.go:195] Run: which crictl
	I0425 18:53:18.118070   24262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 18:53:18.158674   24262 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 18:53:18.158758   24262 ssh_runner.go:195] Run: crio --version
	I0425 18:53:18.192445   24262 ssh_runner.go:195] Run: crio --version
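Before CRI-O is restarted above, the drop-in /etc/crio/crio.conf.d/02-crio.conf is edited in place with sed: the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is forced to cgroupfs, and conmon_cgroup is re-added as "pod". A rough Go equivalent of those in-place edits using regexp (the helper is illustrative; only the file path and values come from the log):

	package main

	import (
		"os"
		"regexp"
	)

	// setTOMLKey replaces any existing `key = ...` line in the drop-in with
	// `key = "value"`, mirroring the sed invocations in the log above.
	func setTOMLKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		if err := setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
			panic(err)
		}
		if err := setTOMLKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
			panic(err)
		}
	}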
	I0425 18:53:18.235932   24262 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 18:53:18.237318   24262 out.go:177]   - env NO_PROXY=192.168.39.189
	I0425 18:53:18.238717   24262 out.go:177]   - env NO_PROXY=192.168.39.189,192.168.39.66
	I0425 18:53:18.240178   24262 main.go:141] libmachine: (ha-912667-m03) Calling .GetIP
	I0425 18:53:18.242594   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:18.242972   24262 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:53:18.242994   24262 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:53:18.243230   24262 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 18:53:18.248298   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:53:18.264425   24262 mustload.go:65] Loading cluster: ha-912667
	I0425 18:53:18.264708   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:53:18.265051   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:53:18.265100   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:53:18.281459   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44043
	I0425 18:53:18.281926   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:53:18.282451   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:53:18.282475   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:53:18.282795   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:53:18.282986   24262 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 18:53:18.284711   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:53:18.284990   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:53:18.285025   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:53:18.299787   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46883
	I0425 18:53:18.300198   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:53:18.300683   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:53:18.300707   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:53:18.301018   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:53:18.301240   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:53:18.301427   24262 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667 for IP: 192.168.39.179
	I0425 18:53:18.301438   24262 certs.go:194] generating shared ca certs ...
	I0425 18:53:18.301452   24262 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:53:18.301608   24262 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 18:53:18.301661   24262 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 18:53:18.301673   24262 certs.go:256] generating profile certs ...
	I0425 18:53:18.301765   24262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key
	I0425 18:53:18.301798   24262 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.8f7228b6
	I0425 18:53:18.301821   24262 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.8f7228b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189 192.168.39.66 192.168.39.179 192.168.39.254]
	I0425 18:53:18.432850   24262 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.8f7228b6 ...
	I0425 18:53:18.432878   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.8f7228b6: {Name:mk6e41bd710998fe356ce65f93113c2167092d8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:53:18.433039   24262 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.8f7228b6 ...
	I0425 18:53:18.433051   24262 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.8f7228b6: {Name:mkf31c6c2f1c1bc77655aa623ce0d079f6c7a498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:53:18.433119   24262 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.8f7228b6 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt
	I0425 18:53:18.433240   24262 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.8f7228b6 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key
	I0425 18:53:18.433358   24262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key
	I0425 18:53:18.433373   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 18:53:18.433386   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 18:53:18.433399   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 18:53:18.433412   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 18:53:18.433424   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 18:53:18.433436   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 18:53:18.433449   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 18:53:18.433461   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 18:53:18.433515   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 18:53:18.433548   24262 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 18:53:18.433555   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 18:53:18.433576   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 18:53:18.433598   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 18:53:18.433618   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 18:53:18.433656   24262 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 18:53:18.433726   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 18:53:18.433741   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:53:18.433750   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 18:53:18.433777   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:53:18.436934   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:53:18.437353   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:53:18.437398   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:53:18.437609   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:53:18.437787   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:53:18.437921   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:53:18.438039   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:53:18.514578   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0425 18:53:18.520594   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0425 18:53:18.534986   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0425 18:53:18.540597   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0425 18:53:18.554830   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0425 18:53:18.560363   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0425 18:53:18.574403   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0425 18:53:18.579401   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0425 18:53:18.592339   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0425 18:53:18.597297   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0425 18:53:18.609908   24262 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0425 18:53:18.614992   24262 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0425 18:53:18.629538   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 18:53:18.659495   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 18:53:18.688248   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 18:53:18.716123   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 18:53:18.745411   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0425 18:53:18.774655   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 18:53:18.803856   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 18:53:18.834607   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0425 18:53:18.864115   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 18:53:18.893731   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 18:53:18.923651   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 18:53:18.951795   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0425 18:53:18.971502   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0425 18:53:18.990777   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0425 18:53:19.009285   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0425 18:53:19.027525   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0425 18:53:19.047213   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0425 18:53:19.065355   24262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0425 18:53:19.083746   24262 ssh_runner.go:195] Run: openssl version
	I0425 18:53:19.090003   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 18:53:19.104003   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:53:19.109596   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:53:19.109652   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 18:53:19.116334   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 18:53:19.128996   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 18:53:19.142687   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 18:53:19.148332   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 18:53:19.148395   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 18:53:19.155004   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 18:53:19.167760   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 18:53:19.180460   24262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 18:53:19.186119   24262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 18:53:19.186181   24262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 18:53:19.192673   24262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 18:53:19.204764   24262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 18:53:19.209519   24262 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 18:53:19.209577   24262 kubeadm.go:928] updating node {m03 192.168.39.179 8443 v1.30.0 crio true true} ...
	I0425 18:53:19.209668   24262 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-912667-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
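The kubelet drop-in generated above clears ExecStart and re-points it at the versioned binary with a per-node --hostname-override and --node-ip. A small text/template sketch that renders the same flag line from node parameters (the struct and template names are illustrative):

	package main

	import (
		"os"
		"text/template"
	)

	type kubeletNode struct {
		Version, Name, IP string
	}

	const unitTmpl = `[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		// Values taken from the log above.
		_ = t.Execute(os.Stdout, kubeletNode{Version: "v1.30.0", Name: "ha-912667-m03", IP: "192.168.39.179"})
	}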
	I0425 18:53:19.209696   24262 kube-vip.go:111] generating kube-vip config ...
	I0425 18:53:19.209738   24262 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0425 18:53:19.229688   24262 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0425 18:53:19.229755   24262 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
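The kube-vip static pod above runs on each control-plane node, claims the virtual IP 192.168.39.254 on eth0 via ARP, holds leader election through the plndr-cp-lock lease, and load-balances the API server on port 8443. A quick sanity-check sketch that parses such a manifest and extracts the image and VIP address, assuming gopkg.in/yaml.v3 and a local kube-vip.yaml copy:

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	type kubeVIPPod struct {
		Spec struct {
			Containers []struct {
				Image string `yaml:"image"`
				Env   []struct {
					Name  string `yaml:"name"`
					Value string `yaml:"value"`
				} `yaml:"env"`
			} `yaml:"containers"`
		} `yaml:"spec"`
	}

	func main() {
		data, err := os.ReadFile("kube-vip.yaml") // manifest as generated above
		if err != nil {
			panic(err)
		}
		var pod kubeVIPPod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			panic(err)
		}
		for _, e := range pod.Spec.Containers[0].Env {
			if e.Name == "address" {
				fmt.Printf("image=%s vip=%s\n", pod.Spec.Containers[0].Image, e.Value)
			}
		}
	}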
	I0425 18:53:19.229808   24262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 18:53:19.240912   24262 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0425 18:53:19.240967   24262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0425 18:53:19.251672   24262 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0425 18:53:19.251686   24262 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0425 18:53:19.251693   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0425 18:53:19.251700   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0425 18:53:19.251747   24262 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0425 18:53:19.251750   24262 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0425 18:53:19.251685   24262 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0425 18:53:19.251802   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:53:19.270889   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0425 18:53:19.270937   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0425 18:53:19.270960   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0425 18:53:19.270967   24262 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0425 18:53:19.270997   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0425 18:53:19.271051   24262 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0425 18:53:19.310844   24262 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0425 18:53:19.310882   24262 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
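Since /var/lib/minikube/binaries/v1.30.0 is empty on the new node, kubectl, kubeadm and kubelet are fetched from dl.k8s.io with a `?checksum=file:...sha256` URL and then scp'd onto the guest. A stdlib-only sketch of the same download-plus-verification for one binary (output path and error handling are simplified):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetch downloads url and returns the response body.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256") // published hex digest
		if err != nil {
			panic(err)
		}
		h := sha256.Sum256(bin)
		if hex.EncodeToString(h[:]) != strings.TrimSpace(string(sum)) {
			panic("checksum mismatch for kubectl")
		}
		if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("verified", len(bin), "bytes")
	}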
	I0425 18:53:20.311066   24262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0425 18:53:20.323330   24262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0425 18:53:20.345409   24262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 18:53:20.366291   24262 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0425 18:53:20.387008   24262 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0425 18:53:20.391355   24262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 18:53:20.407468   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:53:20.560904   24262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:53:20.581539   24262 host.go:66] Checking if "ha-912667" exists ...
	I0425 18:53:20.582032   24262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:53:20.582079   24262 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:53:20.597302   24262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0425 18:53:20.598195   24262 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:53:20.598694   24262 main.go:141] libmachine: Using API Version  1
	I0425 18:53:20.598723   24262 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:53:20.599086   24262 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:53:20.599259   24262 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 18:53:20.599395   24262 start.go:316] joinCluster: &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:53:20.599557   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0425 18:53:20.599580   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 18:53:20.602619   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:53:20.603063   24262 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 18:53:20.603090   24262 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 18:53:20.603207   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 18:53:20.603340   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 18:53:20.603526   24262 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 18:53:20.603656   24262 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 18:53:20.779660   24262 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:53:20.779707   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e104gh.sh6getxhhtdg6ymu --discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-912667-m03 --control-plane --apiserver-advertise-address=192.168.39.179 --apiserver-bind-port=8443"
	I0425 18:53:46.324259   24262 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e104gh.sh6getxhhtdg6ymu --discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-912667-m03 --control-plane --apiserver-advertise-address=192.168.39.179 --apiserver-bind-port=8443": (25.5445293s)
	I0425 18:53:46.324294   24262 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0425 18:53:46.971782   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-912667-m03 minikube.k8s.io/updated_at=2024_04_25T18_53_46_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=ha-912667 minikube.k8s.io/primary=false
	I0425 18:53:47.102167   24262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-912667-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0425 18:53:47.240730   24262 start.go:318] duration metric: took 26.641328067s to joinCluster
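The join above is a two-step flow: minikube first runs kubeadm token create --print-join-command --ttl=0 on the existing control-plane node, then executes the printed kubeadm join command on the new machine with --control-plane and --apiserver-advertise-address, before labeling the node and removing the control-plane NoSchedule taint. The sketch below is a hypothetical local reproduction of step 1 (it is not minikube code); it assumes it runs as root on a node that already has /etc/kubernetes/admin.conf, which kubeadm uses by default.

// Hypothetical sketch: print a reusable control-plane join command, the same
// step the log performs over SSH before running the join on the new node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Step 1: create a token and print the matching join command (token + CA cert hash).
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").CombinedOutput()
	if err != nil {
		fmt.Printf("token create failed: %v\n%s", err, out)
		return
	}
	// Step 2 is run on the joining machine, not here: append the control-plane
	// flags, e.g. --control-plane --apiserver-advertise-address=<node IP>.
	fmt.Printf("join command to run on the new node:\n%s", out)
}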
	I0425 18:53:47.240864   24262 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 18:53:47.242322   24262 out.go:177] * Verifying Kubernetes components...
	I0425 18:53:47.241205   24262 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:53:47.243591   24262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 18:53:47.541877   24262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 18:53:47.585988   24262 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:53:47.586359   24262 kapi.go:59] client config for ha-912667: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.crt", KeyFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key", CAFile:"/home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0425 18:53:47.586443   24262 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.189:8443
	I0425 18:53:47.586700   24262 node_ready.go:35] waiting up to 6m0s for node "ha-912667-m03" to be "Ready" ...
	I0425 18:53:47.586845   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:47.586860   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:47.586870   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:47.586877   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:47.590835   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:48.087327   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:48.087356   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:48.087374   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:48.087379   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:48.092821   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:53:48.587267   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:48.587294   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:48.587305   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:48.587312   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:48.635333   24262 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I0425 18:53:49.087496   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:49.087523   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:49.087536   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:49.087545   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:49.091777   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:49.587190   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:49.587218   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:49.587228   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:49.587235   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:49.590725   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:49.591485   24262 node_ready.go:53] node "ha-912667-m03" has status "Ready":"False"
	I0425 18:53:50.087741   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:50.087762   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:50.087769   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:50.087774   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:50.092367   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:50.587385   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:50.587410   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:50.587420   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:50.587426   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:50.591571   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:51.087336   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:51.087358   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:51.087365   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:51.087370   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:51.091431   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:51.587477   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:51.587501   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:51.587509   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:51.587513   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:51.591781   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:51.592464   24262 node_ready.go:53] node "ha-912667-m03" has status "Ready":"False"
	I0425 18:53:52.087079   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:52.087104   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:52.087114   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:52.087126   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:52.091475   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:52.587954   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:52.587984   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:52.587997   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:52.588003   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:52.592216   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:53.086916   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:53.086943   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:53.086955   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:53.086960   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:53.091541   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:53.587419   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:53.587441   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:53.587450   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:53.587454   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:53.591776   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:54.087492   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:54.087521   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:54.087532   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:54.087538   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:54.093770   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:53:54.094256   24262 node_ready.go:53] node "ha-912667-m03" has status "Ready":"False"
	I0425 18:53:54.587146   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:54.587174   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:54.587182   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:54.587186   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:54.591607   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:55.087514   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:55.087542   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.087554   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.087560   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.106191   24262 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0425 18:53:55.107030   24262 node_ready.go:49] node "ha-912667-m03" has status "Ready":"True"
	I0425 18:53:55.107059   24262 node_ready.go:38] duration metric: took 7.520333617s for node "ha-912667-m03" to be "Ready" ...
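The node_ready wait above is a plain poll of GET /api/v1/nodes/<name> until the node's Ready condition reports True. A rough client-go equivalent is sketched below; it is hypothetical (not the minikube implementation), and the kubeconfig path is a placeholder.

// Minimal sketch: poll a node until its Ready condition is True, assuming a
// placeholder kubeconfig path and the node name from the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-912667-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}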
	I0425 18:53:55.107070   24262 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 18:53:55.107148   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:53:55.107163   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.107173   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.107179   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.134362   24262 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0425 18:53:55.140632   24262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.140724   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-22wvx
	I0425 18:53:55.140739   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.140750   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.140756   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.150957   24262 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0425 18:53:55.151573   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:55.151593   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.151604   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.151610   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.154891   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:55.155456   24262 pod_ready.go:92] pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.155478   24262 pod_ready.go:81] duration metric: took 14.817716ms for pod "coredns-7db6d8ff4d-22wvx" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.155490   24262 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.155558   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-h4s2h
	I0425 18:53:55.155569   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.155578   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.155582   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.158241   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:53:55.159287   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:55.159305   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.159315   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.159320   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.161876   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:53:55.162467   24262 pod_ready.go:92] pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.162486   24262 pod_ready.go:81] duration metric: took 6.988369ms for pod "coredns-7db6d8ff4d-h4s2h" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.162499   24262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.162565   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667
	I0425 18:53:55.162575   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.162585   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.162594   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.166084   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:55.167057   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:55.167070   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.167076   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.167081   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.171470   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:55.172158   24262 pod_ready.go:92] pod "etcd-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.172181   24262 pod_ready.go:81] duration metric: took 9.671098ms for pod "etcd-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.172193   24262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.172259   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m02
	I0425 18:53:55.172272   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.172281   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.172286   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.176266   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:55.177785   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:53:55.177801   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.177810   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.177813   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.180264   24262 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0425 18:53:55.180897   24262 pod_ready.go:92] pod "etcd-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.180912   24262 pod_ready.go:81] duration metric: took 8.711147ms for pod "etcd-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.180924   24262 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.288235   24262 request.go:629] Waited for 107.243045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m03
	I0425 18:53:55.288330   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/etcd-ha-912667-m03
	I0425 18:53:55.288338   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.288349   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.288355   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.294122   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:53:55.488365   24262 request.go:629] Waited for 193.451029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:55.488418   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:55.488424   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.488430   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.488433   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.493693   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:53:55.494314   24262 pod_ready.go:92] pod "etcd-ha-912667-m03" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.494339   24262 pod_ready.go:81] duration metric: took 313.407013ms for pod "etcd-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
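The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter (the QPS and Burst fields on rest.Config), which spaces out the bursts of GETs issued while polling; they are expected here and harmless. If you wanted to loosen that limit in your own tooling, a hedged sketch would look like the following, where the kubeconfig path and the QPS/Burst values are purely illustrative.

// Illustrative only: relax the client-side rate limiter that produces the
// "Waited ... due to client-side throttling" log lines above.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	// Raise the client-side limits (defaults are low, roughly 5 req/s with a small burst)
	// so short bursts of requests are not delayed.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Println("clientset created with QPS=50, Burst=100")
}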
	I0425 18:53:55.494367   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.688199   24262 request.go:629] Waited for 193.737053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667
	I0425 18:53:55.688262   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667
	I0425 18:53:55.688268   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.688275   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.688280   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.692067   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:55.888501   24262 request.go:629] Waited for 195.38776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:55.888560   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:55.888567   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:55.888590   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:55.888599   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:55.892153   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:55.892950   24262 pod_ready.go:92] pod "kube-apiserver-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:55.892969   24262 pod_ready.go:81] duration metric: took 398.590637ms for pod "kube-apiserver-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:55.892978   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:56.088062   24262 request.go:629] Waited for 195.015479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:53:56.088131   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m02
	I0425 18:53:56.088137   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:56.088147   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:56.088155   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:56.093110   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:56.287650   24262 request.go:629] Waited for 193.321791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:53:56.287747   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:53:56.287765   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:56.287776   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:56.287782   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:56.293910   24262 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0425 18:53:56.294517   24262 pod_ready.go:92] pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:56.294535   24262 pod_ready.go:81] duration metric: took 401.549867ms for pod "kube-apiserver-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:56.294544   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:56.487547   24262 request.go:629] Waited for 192.942824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:56.487612   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:56.487617   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:56.487625   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:56.487629   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:56.491542   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:56.687978   24262 request.go:629] Waited for 195.305945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:56.688082   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:56.688090   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:56.688105   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:56.688116   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:56.692382   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:56.888580   24262 request.go:629] Waited for 93.275877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:56.888650   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:56.888658   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:56.888669   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:56.888673   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:56.893577   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:57.087676   24262 request.go:629] Waited for 193.27677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.087745   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.087756   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:57.087776   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:57.087799   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:57.091355   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:57.295147   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:57.295170   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:57.295177   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:57.295181   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:57.299173   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:57.488322   24262 request.go:629] Waited for 188.346006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.488413   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.488422   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:57.488434   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:57.488441   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:57.493000   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:57.794794   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:57.794819   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:57.794827   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:57.794830   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:57.798277   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:57.888510   24262 request.go:629] Waited for 89.282261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.888563   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:57.888570   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:57.888580   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:57.888586   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:57.892567   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:58.294683   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:58.294702   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:58.294710   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:58.294714   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:58.298942   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:58.299686   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:58.299702   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:58.299709   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:58.299713   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:58.303622   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:58.304600   24262 pod_ready.go:102] pod "kube-apiserver-ha-912667-m03" in "kube-system" namespace has status "Ready":"False"
	I0425 18:53:58.795718   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:58.795745   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:58.795756   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:58.795760   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:58.800977   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:53:58.801943   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:58.801967   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:58.801978   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:58.801985   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:58.806113   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:53:59.295432   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-912667-m03
	I0425 18:53:59.295462   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.295470   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.295475   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.299284   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.300323   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:53:59.300340   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.300347   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.300352   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.304215   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.304915   24262 pod_ready.go:92] pod "kube-apiserver-ha-912667-m03" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:59.304935   24262 pod_ready.go:81] duration metric: took 3.010384418s for pod "kube-apiserver-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:59.304949   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:59.305011   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667
	I0425 18:53:59.305022   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.305032   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.305038   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.308834   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.487798   24262 request.go:629] Waited for 178.313383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:59.487865   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:53:59.487873   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.487883   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.487892   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.491597   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.492224   24262 pod_ready.go:92] pod "kube-controller-manager-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:59.492251   24262 pod_ready.go:81] duration metric: took 187.292003ms for pod "kube-controller-manager-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:59.492266   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:59.688448   24262 request.go:629] Waited for 196.118207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:53:59.688514   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m02
	I0425 18:53:59.688522   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.688542   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.688569   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.692519   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.888234   24262 request.go:629] Waited for 195.027515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:53:59.888315   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:53:59.888324   24262 round_trippers.go:469] Request Headers:
	I0425 18:53:59.888331   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:53:59.888344   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:53:59.892038   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:53:59.892717   24262 pod_ready.go:92] pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:53:59.892734   24262 pod_ready.go:81] duration metric: took 400.460928ms for pod "kube-controller-manager-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:53:59.892744   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:00.087909   24262 request.go:629] Waited for 195.107362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m03
	I0425 18:54:00.087990   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-912667-m03
	I0425 18:54:00.088001   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:00.088009   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:00.088013   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:00.092611   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:00.287913   24262 request.go:629] Waited for 194.380558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:00.287995   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:00.288004   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:00.288014   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:00.288024   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:00.291982   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:00.292916   24262 pod_ready.go:92] pod "kube-controller-manager-ha-912667-m03" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:00.292936   24262 pod_ready.go:81] duration metric: took 400.186731ms for pod "kube-controller-manager-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:00.292947   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9zxln" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:00.488111   24262 request.go:629] Waited for 195.107324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zxln
	I0425 18:54:00.488186   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9zxln
	I0425 18:54:00.488192   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:00.488200   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:00.488214   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:00.492219   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:00.688120   24262 request.go:629] Waited for 194.770439ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:00.688183   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:00.688190   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:00.688198   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:00.688203   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:00.691756   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:00.692449   24262 pod_ready.go:92] pod "kube-proxy-9zxln" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:00.692469   24262 pod_ready.go:81] duration metric: took 399.51603ms for pod "kube-proxy-9zxln" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:00.692483   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mkgv5" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:00.888474   24262 request.go:629] Waited for 195.922903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkgv5
	I0425 18:54:00.888569   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mkgv5
	I0425 18:54:00.888581   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:00.888589   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:00.888593   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:00.893765   24262 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0425 18:54:01.088008   24262 request.go:629] Waited for 193.382615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:54:01.088070   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:54:01.088077   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:01.088088   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:01.088094   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:01.092407   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:01.093369   24262 pod_ready.go:92] pod "kube-proxy-mkgv5" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:01.093394   24262 pod_ready.go:81] duration metric: took 400.90273ms for pod "kube-proxy-mkgv5" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:01.093408   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rkbcp" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:01.288497   24262 request.go:629] Waited for 195.011294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rkbcp
	I0425 18:54:01.288592   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rkbcp
	I0425 18:54:01.288601   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:01.288609   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:01.288613   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:01.292744   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:01.487659   24262 request.go:629] Waited for 194.314073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:54:01.487736   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:54:01.487742   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:01.487750   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:01.487755   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:01.492230   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:01.493257   24262 pod_ready.go:92] pod "kube-proxy-rkbcp" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:01.493287   24262 pod_ready.go:81] duration metric: took 399.871904ms for pod "kube-proxy-rkbcp" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:01.493300   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:01.687824   24262 request.go:629] Waited for 194.379121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667
	I0425 18:54:01.687892   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667
	I0425 18:54:01.687900   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:01.687912   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:01.687919   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:01.691711   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:01.887933   24262 request.go:629] Waited for 195.363443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:54:01.888029   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667
	I0425 18:54:01.888042   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:01.888053   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:01.888059   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:01.892043   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:01.892973   24262 pod_ready.go:92] pod "kube-scheduler-ha-912667" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:01.892997   24262 pod_ready.go:81] duration metric: took 399.688109ms for pod "kube-scheduler-ha-912667" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:01.893010   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:02.088084   24262 request.go:629] Waited for 194.983596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m02
	I0425 18:54:02.088148   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m02
	I0425 18:54:02.088156   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.088164   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.088172   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.092045   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:02.288459   24262 request.go:629] Waited for 195.383107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:54:02.288515   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m02
	I0425 18:54:02.288521   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.288529   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.288534   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.293069   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:02.294069   24262 pod_ready.go:92] pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:02.294086   24262 pod_ready.go:81] duration metric: took 401.060695ms for pod "kube-scheduler-ha-912667-m02" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:02.294095   24262 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:02.488379   24262 request.go:629] Waited for 194.220272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m03
	I0425 18:54:02.488491   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-912667-m03
	I0425 18:54:02.488515   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.488529   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.488550   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.493344   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:02.687823   24262 request.go:629] Waited for 193.364395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:02.687923   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes/ha-912667-m03
	I0425 18:54:02.687935   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.687946   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.687957   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.691918   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:02.692539   24262 pod_ready.go:92] pod "kube-scheduler-ha-912667-m03" in "kube-system" namespace has status "Ready":"True"
	I0425 18:54:02.692564   24262 pod_ready.go:81] duration metric: took 398.460848ms for pod "kube-scheduler-ha-912667-m03" in "kube-system" namespace to be "Ready" ...
	I0425 18:54:02.692578   24262 pod_ready.go:38] duration metric: took 7.585495691s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
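Each pod_ready wait above checks the pod's Ready condition rather than just its phase. A small helper in client-go terms is sketched below; it is hypothetical and only mirrors the predicate the log is polling for.

// Minimal sketch: report whether a pod's Ready condition is True.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Tiny usage example on a hand-built pod object.
	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(isPodReady(p)) // prints: true
}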
	I0425 18:54:02.692595   24262 api_server.go:52] waiting for apiserver process to appear ...
	I0425 18:54:02.692656   24262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 18:54:02.708782   24262 api_server.go:72] duration metric: took 15.467874327s to wait for apiserver process to appear ...
	I0425 18:54:02.708812   24262 api_server.go:88] waiting for apiserver healthz status ...
	I0425 18:54:02.708837   24262 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I0425 18:54:02.713298   24262 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I0425 18:54:02.713374   24262 round_trippers.go:463] GET https://192.168.39.189:8443/version
	I0425 18:54:02.713385   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.713398   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.713408   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.714582   24262 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0425 18:54:02.714713   24262 api_server.go:141] control plane version: v1.30.0
	I0425 18:54:02.714730   24262 api_server.go:131] duration metric: took 5.911686ms to wait for apiserver health ...
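The healthz probe above is just an HTTPS GET against /healthz that expects a 200 response with an "ok" body, followed by a GET of /version. A quick manual reproduction is sketched below; certificate verification is skipped purely for brevity against a throwaway test cluster, and the address is the one from the log.

// Minimal sketch: probe the API server's /healthz endpoint directly.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// The API server's cert is signed by minikube's own CA; skip verification
		// here only for a quick manual probe, not for real tooling.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.189:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}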
	I0425 18:54:02.714736   24262 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 18:54:02.888023   24262 request.go:629] Waited for 173.221604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:54:02.888107   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:54:02.888118   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:02.888140   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:02.888166   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:02.898312   24262 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0425 18:54:02.907148   24262 system_pods.go:59] 24 kube-system pods found
	I0425 18:54:02.907177   24262 system_pods.go:61] "coredns-7db6d8ff4d-22wvx" [56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e] Running
	I0425 18:54:02.907182   24262 system_pods.go:61] "coredns-7db6d8ff4d-h4s2h" [f9e2233c-5350-47ab-bdae-6fa35972b601] Running
	I0425 18:54:02.907186   24262 system_pods.go:61] "etcd-ha-912667" [d18fe5ec-655e-4da4-b8de-782eef846d55] Running
	I0425 18:54:02.907189   24262 system_pods.go:61] "etcd-ha-912667-m02" [8d6782f6-b00b-4d10-8a3a-452460974164] Running
	I0425 18:54:02.907192   24262 system_pods.go:61] "etcd-ha-912667-m03" [24ac9b8b-9f01-4edb-b82d-8bca7df1a74f] Running
	I0425 18:54:02.907196   24262 system_pods.go:61] "kindnet-gcbv6" [03aab1af-e03a-4ff7-bb92-6d22c1dd8d2a] Running
	I0425 18:54:02.907200   24262 system_pods.go:61] "kindnet-sq4lb" [049d5dc9-13ec-4135-8785-229071e57d1a] Running
	I0425 18:54:02.907203   24262 system_pods.go:61] "kindnet-xlvjt" [191ff28e-07d7-459e-afe5-e3d8c23e1016] Running
	I0425 18:54:02.907205   24262 system_pods.go:61] "kube-apiserver-ha-912667" [a8339e9c-d67f-4e84-ba79-754ad86fdf82] Running
	I0425 18:54:02.907209   24262 system_pods.go:61] "kube-apiserver-ha-912667-m02" [a420b2a1-207a-435f-98d2-893836a60e78] Running
	I0425 18:54:02.907212   24262 system_pods.go:61] "kube-apiserver-ha-912667-m03" [57c42509-6b00-4e6c-aec0-2780dcb8287e] Running
	I0425 18:54:02.907216   24262 system_pods.go:61] "kube-controller-manager-ha-912667" [6a91aebd-e142-4165-8acb-cc4c49a5df54] Running
	I0425 18:54:02.907219   24262 system_pods.go:61] "kube-controller-manager-ha-912667-m02" [e94e1a60-af79-4a8e-ac11-e7d36c3d68a3] Running
	I0425 18:54:02.907222   24262 system_pods.go:61] "kube-controller-manager-ha-912667-m03" [ed05c95f-7f91-4849-bbf6-0f140d571a46] Running
	I0425 18:54:02.907226   24262 system_pods.go:61] "kube-proxy-9zxln" [96e7485d-d971-49f2-9505-731cdf2f23ab] Running
	I0425 18:54:02.907231   24262 system_pods.go:61] "kube-proxy-mkgv5" [7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a] Running
	I0425 18:54:02.907235   24262 system_pods.go:61] "kube-proxy-rkbcp" [c62d3486-15d6-4398-a397-2f542d8fb074] Running
	I0425 18:54:02.907241   24262 system_pods.go:61] "kube-scheduler-ha-912667" [7dc33762-4bee-467e-9db4-d783ffe04992] Running
	I0425 18:54:02.907249   24262 system_pods.go:61] "kube-scheduler-ha-912667-m02" [d2ab7cf9-3cd9-4b0b-aec1-26aee5cf3b2a] Running
	I0425 18:54:02.907254   24262 system_pods.go:61] "kube-scheduler-ha-912667-m03" [f42a0409-358a-412a-a20e-0dd00e4e7fe3] Running
	I0425 18:54:02.907262   24262 system_pods.go:61] "kube-vip-ha-912667" [bd3267a7-206d-4e47-b154-a7f17a492684] Running
	I0425 18:54:02.907267   24262 system_pods.go:61] "kube-vip-ha-912667-m02" [c0622f7e-0264-4168-b510-7563083cc9d3] Running
	I0425 18:54:02.907274   24262 system_pods.go:61] "kube-vip-ha-912667-m03" [206ce495-8d7a-404d-ba1a-34edfa189d10] Running
	I0425 18:54:02.907279   24262 system_pods.go:61] "storage-provisioner" [f3a0b111-609d-49b3-a056-71eb4b641224] Running
	I0425 18:54:02.907290   24262 system_pods.go:74] duration metric: took 192.54719ms to wait for pod list to return data ...
	I0425 18:54:02.907303   24262 default_sa.go:34] waiting for default service account to be created ...
	I0425 18:54:03.087577   24262 request.go:629] Waited for 180.195404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/default/serviceaccounts
	I0425 18:54:03.087632   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/default/serviceaccounts
	I0425 18:54:03.087637   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:03.087644   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:03.087648   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:03.091310   24262 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0425 18:54:03.091439   24262 default_sa.go:45] found service account: "default"
	I0425 18:54:03.091457   24262 default_sa.go:55] duration metric: took 184.144541ms for default service account to be created ...
	I0425 18:54:03.091469   24262 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 18:54:03.287883   24262 request.go:629] Waited for 196.339848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:54:03.287947   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/namespaces/kube-system/pods
	I0425 18:54:03.287955   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:03.287978   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:03.287985   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:03.296722   24262 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0425 18:54:03.303343   24262 system_pods.go:86] 24 kube-system pods found
	I0425 18:54:03.303368   24262 system_pods.go:89] "coredns-7db6d8ff4d-22wvx" [56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e] Running
	I0425 18:54:03.303373   24262 system_pods.go:89] "coredns-7db6d8ff4d-h4s2h" [f9e2233c-5350-47ab-bdae-6fa35972b601] Running
	I0425 18:54:03.303378   24262 system_pods.go:89] "etcd-ha-912667" [d18fe5ec-655e-4da4-b8de-782eef846d55] Running
	I0425 18:54:03.303383   24262 system_pods.go:89] "etcd-ha-912667-m02" [8d6782f6-b00b-4d10-8a3a-452460974164] Running
	I0425 18:54:03.303387   24262 system_pods.go:89] "etcd-ha-912667-m03" [24ac9b8b-9f01-4edb-b82d-8bca7df1a74f] Running
	I0425 18:54:03.303391   24262 system_pods.go:89] "kindnet-gcbv6" [03aab1af-e03a-4ff7-bb92-6d22c1dd8d2a] Running
	I0425 18:54:03.303395   24262 system_pods.go:89] "kindnet-sq4lb" [049d5dc9-13ec-4135-8785-229071e57d1a] Running
	I0425 18:54:03.303398   24262 system_pods.go:89] "kindnet-xlvjt" [191ff28e-07d7-459e-afe5-e3d8c23e1016] Running
	I0425 18:54:03.303403   24262 system_pods.go:89] "kube-apiserver-ha-912667" [a8339e9c-d67f-4e84-ba79-754ad86fdf82] Running
	I0425 18:54:03.303407   24262 system_pods.go:89] "kube-apiserver-ha-912667-m02" [a420b2a1-207a-435f-98d2-893836a60e78] Running
	I0425 18:54:03.303411   24262 system_pods.go:89] "kube-apiserver-ha-912667-m03" [57c42509-6b00-4e6c-aec0-2780dcb8287e] Running
	I0425 18:54:03.303416   24262 system_pods.go:89] "kube-controller-manager-ha-912667" [6a91aebd-e142-4165-8acb-cc4c49a5df54] Running
	I0425 18:54:03.303421   24262 system_pods.go:89] "kube-controller-manager-ha-912667-m02" [e94e1a60-af79-4a8e-ac11-e7d36c3d68a3] Running
	I0425 18:54:03.303425   24262 system_pods.go:89] "kube-controller-manager-ha-912667-m03" [ed05c95f-7f91-4849-bbf6-0f140d571a46] Running
	I0425 18:54:03.303428   24262 system_pods.go:89] "kube-proxy-9zxln" [96e7485d-d971-49f2-9505-731cdf2f23ab] Running
	I0425 18:54:03.303432   24262 system_pods.go:89] "kube-proxy-mkgv5" [7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a] Running
	I0425 18:54:03.303435   24262 system_pods.go:89] "kube-proxy-rkbcp" [c62d3486-15d6-4398-a397-2f542d8fb074] Running
	I0425 18:54:03.303439   24262 system_pods.go:89] "kube-scheduler-ha-912667" [7dc33762-4bee-467e-9db4-d783ffe04992] Running
	I0425 18:54:03.303446   24262 system_pods.go:89] "kube-scheduler-ha-912667-m02" [d2ab7cf9-3cd9-4b0b-aec1-26aee5cf3b2a] Running
	I0425 18:54:03.303449   24262 system_pods.go:89] "kube-scheduler-ha-912667-m03" [f42a0409-358a-412a-a20e-0dd00e4e7fe3] Running
	I0425 18:54:03.303452   24262 system_pods.go:89] "kube-vip-ha-912667" [bd3267a7-206d-4e47-b154-a7f17a492684] Running
	I0425 18:54:03.303456   24262 system_pods.go:89] "kube-vip-ha-912667-m02" [c0622f7e-0264-4168-b510-7563083cc9d3] Running
	I0425 18:54:03.303459   24262 system_pods.go:89] "kube-vip-ha-912667-m03" [206ce495-8d7a-404d-ba1a-34edfa189d10] Running
	I0425 18:54:03.303465   24262 system_pods.go:89] "storage-provisioner" [f3a0b111-609d-49b3-a056-71eb4b641224] Running
	I0425 18:54:03.303470   24262 system_pods.go:126] duration metric: took 211.992421ms to wait for k8s-apps to be running ...
	I0425 18:54:03.303477   24262 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 18:54:03.303518   24262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 18:54:03.320069   24262 system_svc.go:56] duration metric: took 16.581113ms WaitForService to wait for kubelet
	I0425 18:54:03.320104   24262 kubeadm.go:576] duration metric: took 16.079199643s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 18:54:03.320125   24262 node_conditions.go:102] verifying NodePressure condition ...
	I0425 18:54:03.487802   24262 request.go:629] Waited for 167.588279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.189:8443/api/v1/nodes
	I0425 18:54:03.487856   24262 round_trippers.go:463] GET https://192.168.39.189:8443/api/v1/nodes
	I0425 18:54:03.487862   24262 round_trippers.go:469] Request Headers:
	I0425 18:54:03.487873   24262 round_trippers.go:473]     Accept: application/json, */*
	I0425 18:54:03.487882   24262 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0425 18:54:03.492855   24262 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0425 18:54:03.494180   24262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:54:03.494200   24262 node_conditions.go:123] node cpu capacity is 2
	I0425 18:54:03.494222   24262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:54:03.494228   24262 node_conditions.go:123] node cpu capacity is 2
	I0425 18:54:03.494234   24262 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 18:54:03.494239   24262 node_conditions.go:123] node cpu capacity is 2
	I0425 18:54:03.494246   24262 node_conditions.go:105] duration metric: took 174.114337ms to run NodePressure ...
	I0425 18:54:03.494264   24262 start.go:240] waiting for startup goroutines ...
	I0425 18:54:03.494294   24262 start.go:254] writing updated cluster config ...
	I0425 18:54:03.494573   24262 ssh_runner.go:195] Run: rm -f paused
	I0425 18:54:03.545098   24262 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 18:54:03.547863   24262 out.go:177] * Done! kubectl is now configured to use "ha-912667" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.762018684Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a60e826-346f-42e5-bf22-247689ae2959 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.763273250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af0b5bb4-3ad3-4dde-b851-0bafd851f966 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.764209184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714071510764184956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af0b5bb4-3ad3-4dde-b851-0bafd851f966 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.765175970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f728ac8b-452b-4646-837e-f196dbcfbe06 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.765226347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f728ac8b-452b-4646-837e-f196dbcfbe06 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.765463648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071248602377761,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e68b1816950df1006cebe8ba8db228e4e894845505ce347266259b3e578daa,PodSandboxId:7f6b143ce4ab2496004c7e5c543759e65ce5ab68f51036cc9424cfd815f8b89f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071035239874404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034742556547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034727843572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-53
50-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47cf3b242de510131bcf58c4eead7934b5a457fa0fd6dc02c0376efb92cbd562,PodSandboxId:f26340b588292da1834879078cdffa8cf368a5c6832c6c9592659eaa2df3cc69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17140710
32863913405,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071032735256490,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24e946cc9871d59976b6e84efd38da336416d3442e75673080a8e5eb92ed6d4,PodSandboxId:d178c1dd267a0a71baecb334e62c5374a33e11b56ca0eed9f3aa0842d1a38ef7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071015803981933,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e92de90328c0d5bf0b78a6487dd065,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071012727880319,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98,PodSandboxId:7e20b6240b0cfc83339d367844cb1a47456b01ad53b8c97f3164eea50b34e875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071012693991926,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071012719200136,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353,PodSandboxId:73c1b7bec4c78211248abec36ca14f9fdf1fec9bf80bd4e86fa940f45b3ed05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071012685351732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f728ac8b-452b-4646-837e-f196dbcfbe06 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.819106975Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a2d7594-f368-4363-b41c-b322a894feda name=/runtime.v1.RuntimeService/Version
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.819247782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a2d7594-f368-4363-b41c-b322a894feda name=/runtime.v1.RuntimeService/Version
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.822522902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b54243ee-3b13-4277-8ed5-aac7a66a0be8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.823249874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714071510823217459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b54243ee-3b13-4277-8ed5-aac7a66a0be8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.824008714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51141f40-1c34-4a64-b394-0a60168b724e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.824093271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51141f40-1c34-4a64-b394-0a60168b724e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.824338666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071248602377761,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e68b1816950df1006cebe8ba8db228e4e894845505ce347266259b3e578daa,PodSandboxId:7f6b143ce4ab2496004c7e5c543759e65ce5ab68f51036cc9424cfd815f8b89f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071035239874404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034742556547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034727843572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-53
50-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47cf3b242de510131bcf58c4eead7934b5a457fa0fd6dc02c0376efb92cbd562,PodSandboxId:f26340b588292da1834879078cdffa8cf368a5c6832c6c9592659eaa2df3cc69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17140710
32863913405,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071032735256490,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24e946cc9871d59976b6e84efd38da336416d3442e75673080a8e5eb92ed6d4,PodSandboxId:d178c1dd267a0a71baecb334e62c5374a33e11b56ca0eed9f3aa0842d1a38ef7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071015803981933,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e92de90328c0d5bf0b78a6487dd065,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071012727880319,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98,PodSandboxId:7e20b6240b0cfc83339d367844cb1a47456b01ad53b8c97f3164eea50b34e875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071012693991926,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071012719200136,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353,PodSandboxId:73c1b7bec4c78211248abec36ca14f9fdf1fec9bf80bd4e86fa940f45b3ed05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071012685351732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51141f40-1c34-4a64-b394-0a60168b724e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.878063968Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eeb216e1-2bdb-4965-bb1b-26406961c5d9 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.878167291Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eeb216e1-2bdb-4965-bb1b-26406961c5d9 name=/runtime.v1.RuntimeService/Version
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.881470463Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=327d82b6-a905-4a59-9351-c25fa8058fc6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.881803425Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-nxhjn,Uid:eb1062c1-8c87-4e99-80a2-a114d2e0c709,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714071245844042336,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:54:04.632127009Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7f6b143ce4ab2496004c7e5c543759e65ce5ab68f51036cc9424cfd815f8b89f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f3a0b111-609d-49b3-a056-71eb4b641224,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1714071035104234362,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-25T18:50:34.789496361Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-22wvx,Uid:56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714071034500135496,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:50:34.187001157Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-h4s2h,Uid:f9e2233c-5350-47ab-bdae-6fa35972b601,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1714071034490934779,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:50:34.183235442Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&PodSandboxMetadata{Name:kube-proxy-mkgv5,Uid:7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714071032360561767,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-04-25T18:50:31.103570496Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f26340b588292da1834879078cdffa8cf368a5c6832c6c9592659eaa2df3cc69,Metadata:&PodSandboxMetadata{Name:kindnet-xlvjt,Uid:191ff28e-07d7-459e-afe5-e3d8c23e1016,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714071032346613602,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T18:50:31.117130783Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&PodSandboxMetadata{Name:etcd-ha-912667,Uid:f63dc5c47bed909879d47a4fe5ebbb9a,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1714071012470607450,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.189:2379,kubernetes.io/config.hash: f63dc5c47bed909879d47a4fe5ebbb9a,kubernetes.io/config.seen: 2024-04-25T18:50:11.967596863Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d178c1dd267a0a71baecb334e62c5374a33e11b56ca0eed9f3aa0842d1a38ef7,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-912667,Uid:b4e92de90328c0d5bf0b78a6487dd065,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714071012468978904,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e92de90328c0d5bf0b78a6487dd065,},Annotations:m
ap[string]string{kubernetes.io/config.hash: b4e92de90328c0d5bf0b78a6487dd065,kubernetes.io/config.seen: 2024-04-25T18:50:11.967604187Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-912667,Uid:92d273ee11723a3e0ac3b49ca2112419,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714071012449552555,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 92d273ee11723a3e0ac3b49ca2112419,kubernetes.io/config.seen: 2024-04-25T18:50:11.967603480Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:73c1b7bec4c78211248abec36ca14f9fdf1fec9bf80bd4e86fa940f45b3ed05e,Metadata:&PodSandboxMetadata{Name:kube-a
piserver-ha-912667,Uid:3ef9d6e5decdc8ee65e0e74c73411380,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714071012446884068,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.189:8443,kubernetes.io/config.hash: 3ef9d6e5decdc8ee65e0e74c73411380,kubernetes.io/config.seen: 2024-04-25T18:50:11.967601396Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7e20b6240b0cfc83339d367844cb1a47456b01ad53b8c97f3164eea50b34e875,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-912667,Uid:0f8eae540ae6f75803c1cce277c135c8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714071012430304723,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0f8eae540ae6f75803c1cce277c135c8,kubernetes.io/config.seen: 2024-04-25T18:50:11.967602607Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=327d82b6-a905-4a59-9351-c25fa8058fc6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.882505137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ec18ca6-ddaa-447e-becd-ddbd8929ad4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.882587945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ec18ca6-ddaa-447e-becd-ddbd8929ad4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.883576779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071248602377761,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e68b1816950df1006cebe8ba8db228e4e894845505ce347266259b3e578daa,PodSandboxId:7f6b143ce4ab2496004c7e5c543759e65ce5ab68f51036cc9424cfd815f8b89f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071035239874404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034742556547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034727843572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-53
50-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47cf3b242de510131bcf58c4eead7934b5a457fa0fd6dc02c0376efb92cbd562,PodSandboxId:f26340b588292da1834879078cdffa8cf368a5c6832c6c9592659eaa2df3cc69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17140710
32863913405,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071032735256490,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24e946cc9871d59976b6e84efd38da336416d3442e75673080a8e5eb92ed6d4,PodSandboxId:d178c1dd267a0a71baecb334e62c5374a33e11b56ca0eed9f3aa0842d1a38ef7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071015803981933,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e92de90328c0d5bf0b78a6487dd065,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071012727880319,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98,PodSandboxId:7e20b6240b0cfc83339d367844cb1a47456b01ad53b8c97f3164eea50b34e875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071012693991926,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071012719200136,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353,PodSandboxId:73c1b7bec4c78211248abec36ca14f9fdf1fec9bf80bd4e86fa940f45b3ed05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071012685351732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ec18ca6-ddaa-447e-becd-ddbd8929ad4d name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.885030074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f85cf1ec-e2b3-4039-b6da-720b64338d08 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.885536781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714071510885513447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f85cf1ec-e2b3-4039-b6da-720b64338d08 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.886230888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=523251a6-d775-4702-89be-d43f3396c4f1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.886325281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=523251a6-d775-4702-89be-d43f3396c4f1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 18:58:30 ha-912667 crio[681]: time="2024-04-25 18:58:30.886582727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071248602377761,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38e68b1816950df1006cebe8ba8db228e4e894845505ce347266259b3e578daa,PodSandboxId:7f6b143ce4ab2496004c7e5c543759e65ce5ab68f51036cc9424cfd815f8b89f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071035239874404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034742556547,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071034727843572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-53
50-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47cf3b242de510131bcf58c4eead7934b5a457fa0fd6dc02c0376efb92cbd562,PodSandboxId:f26340b588292da1834879078cdffa8cf368a5c6832c6c9592659eaa2df3cc69,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17140710
32863913405,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071032735256490,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e24e946cc9871d59976b6e84efd38da336416d3442e75673080a8e5eb92ed6d4,PodSandboxId:d178c1dd267a0a71baecb334e62c5374a33e11b56ca0eed9f3aa0842d1a38ef7,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071015803981933,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e92de90328c0d5bf0b78a6487dd065,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071012727880319,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98,PodSandboxId:7e20b6240b0cfc83339d367844cb1a47456b01ad53b8c97f3164eea50b34e875,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071012693991926,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071012719200136,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353,PodSandboxId:73c1b7bec4c78211248abec36ca14f9fdf1fec9bf80bd4e86fa940f45b3ed05e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071012685351732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=523251a6-d775-4702-89be-d43f3396c4f1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cb806d6102b91       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   4a7d7ef3e980e       busybox-fc5497c4f-nxhjn
	38e68b1816950       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   7f6b143ce4ab2       storage-provisioner
	5b5e973107f16       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   5f41aaba12a45       coredns-7db6d8ff4d-22wvx
	877510603b828       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   7eff20f80efe1       coredns-7db6d8ff4d-h4s2h
	47cf3b242de51       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   f26340b588292       kindnet-xlvjt
	35f0443a12a2f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago       Running             kube-proxy                0                   56d2b6ff099a0       kube-proxy-mkgv5
	e24e946cc9871       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     8 minutes ago       Running             kube-vip                  0                   d178c1dd267a0       kube-vip-ha-912667
	6d0da8d06f797       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      8 minutes ago       Running             kube-scheduler            0                   10902ac1c9f4f       kube-scheduler-ha-912667
	860c8d827dba6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   b27e008a10a06       etcd-ha-912667
	9c0bd11b87eb3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      8 minutes ago       Running             kube-controller-manager   0                   7e20b6240b0cf       kube-controller-manager-ha-912667
	8ab9c0712a08a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      8 minutes ago       Running             kube-apiserver            0                   73c1b7bec4c78       kube-apiserver-ha-912667
	
	
	==> coredns [5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50659 - 8179 "HINFO IN 4082603258215062617.8291093497106509912. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013059871s
	[INFO] 10.244.2.2:40968 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.005406616s
	[INFO] 10.244.2.2:35686 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.005142825s
	[INFO] 10.244.0.4:32831 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001738929s
	[INFO] 10.244.1.2:38408 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00017538s
	[INFO] 10.244.2.2:37503 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003970142s
	[INFO] 10.244.2.2:40887 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218678s
	[INFO] 10.244.0.4:49981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001952122s
	[INFO] 10.244.0.4:56986 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183129s
	[INFO] 10.244.0.4:33316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126163s
	[INFO] 10.244.1.2:34817 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000365634s
	[INFO] 10.244.1.2:38909 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001350261s
	[INFO] 10.244.1.2:51802 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101088s
	[INFO] 10.244.2.2:47175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020899s
	[INFO] 10.244.2.2:46654 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000319039s
	[INFO] 10.244.2.2:36020 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135369s
	[INFO] 10.244.1.2:58245 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248988s
	[INFO] 10.244.1.2:45237 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202978s
	[INFO] 10.244.0.4:52108 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149798s
	[INFO] 10.244.0.4:52793 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093152s
	[INFO] 10.244.1.2:57128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187429s
	[INFO] 10.244.1.2:40536 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186246s
	[INFO] 10.244.1.2:52690 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120066s
	
	
	==> coredns [877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275] <==
	[INFO] 10.244.2.2:46440 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000251173s
	[INFO] 10.244.0.4:46858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189598s
	[INFO] 10.244.0.4:39745 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154962s
	[INFO] 10.244.0.4:50677 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098624s
	[INFO] 10.244.0.4:47040 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001411651s
	[INFO] 10.244.0.4:51578 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122143s
	[INFO] 10.244.1.2:40259 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165953s
	[INFO] 10.244.1.2:39729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001829607s
	[INFO] 10.244.1.2:34733 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172404s
	[INFO] 10.244.1.2:45725 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129433s
	[INFO] 10.244.1.2:35820 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133249s
	[INFO] 10.244.2.2:40405 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00168841s
	[INFO] 10.244.0.4:40751 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000295717s
	[INFO] 10.244.0.4:35528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102349s
	[INFO] 10.244.0.4:36374 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00035359s
	[INFO] 10.244.0.4:51732 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098091s
	[INFO] 10.244.1.2:41291 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000329271s
	[INFO] 10.244.1.2:36756 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159777s
	[INFO] 10.244.2.2:54364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000374806s
	[INFO] 10.244.2.2:35469 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0003009s
	[INFO] 10.244.2.2:57557 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000412395s
	[INFO] 10.244.2.2:55375 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188342s
	[INFO] 10.244.0.4:50283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136579s
	[INFO] 10.244.0.4:60253 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000062518s
	[INFO] 10.244.1.2:48368 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000591883s
	
	
	==> describe nodes <==
	Name:               ha-912667
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T18_50_19_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:50:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 18:58:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 18:54:23 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 18:54:23 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 18:54:23 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 18:54:23 +0000   Thu, 25 Apr 2024 18:50:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    ha-912667
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3a8edadaa67460ebdc313c0c3e1c3f7
	  System UUID:                a3a8edad-aa67-460e-bdc3-13c0c3e1c3f7
	  Boot ID:                    dc005c29-5a5e-4df7-8967-c057d8b3aa0a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nxhjn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-7db6d8ff4d-22wvx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m59s
	  kube-system                 coredns-7db6d8ff4d-h4s2h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m59s
	  kube-system                 etcd-ha-912667                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m13s
	  kube-system                 kindnet-xlvjt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m
	  kube-system                 kube-apiserver-ha-912667             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-controller-manager-ha-912667    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-proxy-mkgv5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-scheduler-ha-912667             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-vip-ha-912667                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m58s                  kube-proxy       
	  Normal  Starting                 8m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     8m19s (x7 over 8m20s)  kubelet          Node ha-912667 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m19s (x8 over 8m20s)  kubelet          Node ha-912667 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m19s (x8 over 8m20s)  kubelet          Node ha-912667 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m13s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m13s                  kubelet          Node ha-912667 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m13s                  kubelet          Node ha-912667 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m13s                  kubelet          Node ha-912667 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m1s                   node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal  NodeReady                7m57s                  kubelet          Node ha-912667 status is now: NodeReady
	  Normal  RegisteredNode           5m43s                  node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal  RegisteredNode           4m30s                  node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	
	
	Name:               ha-912667-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T18_52_33_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:52:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 18:55:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 25 Apr 2024 18:54:33 +0000   Thu, 25 Apr 2024 18:55:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 25 Apr 2024 18:54:33 +0000   Thu, 25 Apr 2024 18:55:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 25 Apr 2024 18:54:33 +0000   Thu, 25 Apr 2024 18:55:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 25 Apr 2024 18:54:33 +0000   Thu, 25 Apr 2024 18:55:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.66
	  Hostname:    ha-912667-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82894439088e4cc98841c062c296fef3
	  System UUID:                82894439-088e-4cc9-8841-c062c296fef3
	  Boot ID:                    a05283e8-2146-4bc2-bd15-7ae5e2b51bec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tcxzk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 etcd-ha-912667-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m59s
	  kube-system                 kindnet-sq4lb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m1s
	  kube-system                 kube-apiserver-ha-912667-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-controller-manager-ha-912667-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-rkbcp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-ha-912667-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-vip-ha-912667-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m55s                kube-proxy       
	  Normal  RegisteredNode           6m1s                 node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  NodeHasSufficientMemory  6m1s (x8 over 6m1s)  kubelet          Node ha-912667-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)  kubelet          Node ha-912667-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x7 over 6m1s)  kubelet          Node ha-912667-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m43s                node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  RegisteredNode           4m30s                node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  NodeNotReady             2m35s                node-controller  Node ha-912667-m02 status is now: NodeNotReady
	
	
	Name:               ha-912667-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T18_53_46_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:53:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 18:58:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 18:54:13 +0000   Thu, 25 Apr 2024 18:53:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 18:54:13 +0000   Thu, 25 Apr 2024 18:53:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 18:54:13 +0000   Thu, 25 Apr 2024 18:53:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 18:54:13 +0000   Thu, 25 Apr 2024 18:53:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    ha-912667-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b314db0e66974911a4c3c03513ed8a46
	  System UUID:                b314db0e-6697-4911-a4c3-c03513ed8a46
	  Boot ID:                    00746489-af97-4229-a221-4ab46c60d093
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6lkjg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 etcd-ha-912667-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m47s
	  kube-system                 kindnet-gcbv6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m49s
	  kube-system                 kube-apiserver-ha-912667-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-ha-912667-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-9zxln                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-scheduler-ha-912667-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-vip-ha-912667-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m49s (x8 over 4m49s)  kubelet          Node ha-912667-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s (x8 over 4m49s)  kubelet          Node ha-912667-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s (x7 over 4m49s)  kubelet          Node ha-912667-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	  Normal  RegisteredNode           4m46s                  node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	  Normal  RegisteredNode           4m30s                  node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	
	
	Name:               ha-912667-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T18_54_45_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:54:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 18:58:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 18:55:15 +0000   Thu, 25 Apr 2024 18:54:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 18:55:15 +0000   Thu, 25 Apr 2024 18:54:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 18:55:15 +0000   Thu, 25 Apr 2024 18:54:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 18:55:15 +0000   Thu, 25 Apr 2024 18:54:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-912667-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6d1da6a42954aa3b31899cd270783aa
	  System UUID:                c6d1da6a-4295-4aa3-b318-99cd270783aa
	  Boot ID:                    1273025a-2c47-413b-acda-da649c6acca7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4l974       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m46s
	  kube-system                 kube-proxy-64vg4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m47s (x2 over 3m47s)  kubelet          Node ha-912667-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m47s (x2 over 3m47s)  kubelet          Node ha-912667-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s (x2 over 3m47s)  kubelet          Node ha-912667-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal  NodeReady                3m36s                  kubelet          Node ha-912667-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr25 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054310] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044068] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.656449] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.562589] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.723180] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr25 18:50] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.058108] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076447] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.197185] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.122034] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.313908] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.923241] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.067466] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.659823] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.462418] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.581179] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.076665] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.874397] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.005828] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f] <==
	{"level":"warn","ts":"2024-04-25T18:58:31.223826Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.23496Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.239108Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.248869Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.263296Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.276667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.287517Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.293431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.298908Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.312864Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.318863Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.320507Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.333494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.339637Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.343257Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.351363Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.35785Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.36536Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.369131Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.373421Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.379554Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.389982Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.397363Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.404458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T18:58:31.416979Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:58:31 up 8 min,  0 users,  load average: 0.26, 0.40, 0.23
	Linux ha-912667 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [47cf3b242de510131bcf58c4eead7934b5a457fa0fd6dc02c0376efb92cbd562] <==
	I0425 18:57:54.453815       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 18:58:04.466880       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 18:58:04.467021       1 main.go:227] handling current node
	I0425 18:58:04.467054       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 18:58:04.467074       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 18:58:04.467244       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0425 18:58:04.467285       1 main.go:250] Node ha-912667-m03 has CIDR [10.244.2.0/24] 
	I0425 18:58:04.467375       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 18:58:04.467395       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 18:58:14.486671       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 18:58:14.489667       1 main.go:227] handling current node
	I0425 18:58:14.489797       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 18:58:14.489831       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 18:58:14.490125       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0425 18:58:14.490161       1 main.go:250] Node ha-912667-m03 has CIDR [10.244.2.0/24] 
	I0425 18:58:14.490340       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 18:58:14.490361       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 18:58:24.501831       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 18:58:24.501902       1 main.go:227] handling current node
	I0425 18:58:24.501936       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 18:58:24.501945       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 18:58:24.502141       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0425 18:58:24.502193       1 main.go:250] Node ha-912667-m03 has CIDR [10.244.2.0/24] 
	I0425 18:58:24.502273       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 18:58:24.502320       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353] <==
	I0425 18:50:18.954492       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0425 18:50:18.972333       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0425 18:50:31.068906       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0425 18:50:31.818951       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0425 18:52:31.749227       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0425 18:52:31.749915       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0425 18:52:31.749805       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 146.53µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0425 18:52:31.751150       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0425 18:52:31.752523       1 timeout.go:142] post-timeout activity - time-elapsed: 3.455121ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0425 18:54:10.886575       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51404: use of closed network connection
	E0425 18:54:11.143229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51422: use of closed network connection
	E0425 18:54:11.380769       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51434: use of closed network connection
	E0425 18:54:11.608552       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51456: use of closed network connection
	E0425 18:54:11.822446       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51472: use of closed network connection
	E0425 18:54:12.039290       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51494: use of closed network connection
	E0425 18:54:12.261989       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51512: use of closed network connection
	E0425 18:54:12.479460       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51528: use of closed network connection
	E0425 18:54:12.705049       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51540: use of closed network connection
	E0425 18:54:13.065495       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51584: use of closed network connection
	E0425 18:54:13.298958       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51606: use of closed network connection
	E0425 18:54:13.532622       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51630: use of closed network connection
	E0425 18:54:13.738200       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51638: use of closed network connection
	E0425 18:54:13.956410       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51664: use of closed network connection
	E0425 18:54:14.174400       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51692: use of closed network connection
	W0425 18:55:27.545284       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.179 192.168.39.189]
	
	
	==> kube-controller-manager [9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98] <==
	I0425 18:53:42.537776       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-912667-m03" podCIDRs=["10.244.2.0/24"]
	I0425 18:53:45.998575       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-912667-m03"
	I0425 18:54:04.582253       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128.844127ms"
	I0425 18:54:04.740175       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="157.28527ms"
	I0425 18:54:04.930996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="190.747285ms"
	I0425 18:54:04.953877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.546221ms"
	I0425 18:54:04.953987       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.88µs"
	I0425 18:54:05.920469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.67µs"
	I0425 18:54:05.938237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.17µs"
	I0425 18:54:05.948058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.15µs"
	I0425 18:54:08.758524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.970909ms"
	I0425 18:54:08.758846       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.843µs"
	I0425 18:54:08.952131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.477937ms"
	I0425 18:54:08.952325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="109.172µs"
	I0425 18:54:10.391181       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.077782ms"
	I0425 18:54:10.391466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.189µs"
	E0425 18:54:44.924095       1 certificate_controller.go:146] Sync csr-k8grv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-k8grv": the object has been modified; please apply your changes to the latest version and try again
	E0425 18:54:44.924380       1 certificate_controller.go:146] Sync csr-k8grv failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-k8grv": the object has been modified; please apply your changes to the latest version and try again
	I0425 18:54:45.200886       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-912667-m04\" does not exist"
	I0425 18:54:45.242043       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-912667-m04" podCIDRs=["10.244.3.0/24"]
	I0425 18:54:46.029288       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-912667-m04"
	I0425 18:54:55.658622       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-912667-m04"
	I0425 18:55:56.074474       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-912667-m04"
	I0425 18:55:56.177160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.526494ms"
	I0425 18:55:56.177994       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.755µs"
	
	
	==> kube-proxy [35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386] <==
	I0425 18:50:33.066573       1 server_linux.go:69] "Using iptables proxy"
	I0425 18:50:33.092210       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.189"]
	I0425 18:50:33.176956       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 18:50:33.177064       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 18:50:33.177082       1 server_linux.go:165] "Using iptables Proxier"
	I0425 18:50:33.181211       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 18:50:33.181406       1 server.go:872] "Version info" version="v1.30.0"
	I0425 18:50:33.181417       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 18:50:33.183895       1 config.go:192] "Starting service config controller"
	I0425 18:50:33.183931       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 18:50:33.183950       1 config.go:101] "Starting endpoint slice config controller"
	I0425 18:50:33.183954       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 18:50:33.184523       1 config.go:319] "Starting node config controller"
	I0425 18:50:33.184529       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 18:50:33.284779       1 shared_informer.go:320] Caches are synced for node config
	I0425 18:50:33.284935       1 shared_informer.go:320] Caches are synced for service config
	I0425 18:50:33.285001       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5] <==
	E0425 18:54:04.525141       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod def1a9f3-c061-480c-9644-abd5c6c37078(default/busybox-fc5497c4f-6lkjg) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-6lkjg"
	E0425 18:54:04.525245       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6lkjg\": pod busybox-fc5497c4f-6lkjg is already assigned to node \"ha-912667-m03\"" pod="default/busybox-fc5497c4f-6lkjg"
	I0425 18:54:04.525314       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-6lkjg" node="ha-912667-m03"
	E0425 18:54:45.301209       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dx5dw\": pod kube-proxy-dx5dw is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dx5dw" node="ha-912667-m04"
	E0425 18:54:45.302601       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dx5dw\": pod kube-proxy-dx5dw is already assigned to node \"ha-912667-m04\"" pod="kube-system/kube-proxy-dx5dw"
	E0425 18:54:45.317163       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4l974\": pod kindnet-4l974 is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4l974" node="ha-912667-m04"
	E0425 18:54:45.322558       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 186c0056-6cc0-4696-b1ed-4d5013b794f6(kube-system/kindnet-4l974) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4l974"
	E0425 18:54:45.326251       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4l974\": pod kindnet-4l974 is already assigned to node \"ha-912667-m04\"" pod="kube-system/kindnet-4l974"
	I0425 18:54:45.326657       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4l974" node="ha-912667-m04"
	E0425 18:54:45.359237       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8dczp\": pod kindnet-8dczp is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-8dczp" node="ha-912667-m04"
	E0425 18:54:45.359625       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1615b53c-82a1-4989-8a5c-73d1ece27d1d(kube-system/kindnet-8dczp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-8dczp"
	E0425 18:54:45.359841       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8dczp\": pod kindnet-8dczp is already assigned to node \"ha-912667-m04\"" pod="kube-system/kindnet-8dczp"
	I0425 18:54:45.359969       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8dczp" node="ha-912667-m04"
	E0425 18:54:45.370471       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6fpnz\": pod kube-proxy-6fpnz is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6fpnz" node="ha-912667-m04"
	E0425 18:54:45.371240       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 50f143aa-15a7-468d-a01b-80259f6b5d9f(kube-system/kube-proxy-6fpnz) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-6fpnz"
	E0425 18:54:45.371330       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6fpnz\": pod kube-proxy-6fpnz is already assigned to node \"ha-912667-m04\"" pod="kube-system/kube-proxy-6fpnz"
	I0425 18:54:45.371405       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-6fpnz" node="ha-912667-m04"
	E0425 18:54:45.423818       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tdqkk\": pod kindnet-tdqkk is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tdqkk" node="ha-912667-m04"
	E0425 18:54:45.427112       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 75fe41f7-fcc1-4042-b309-50d32525a2aa(kube-system/kindnet-tdqkk) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tdqkk"
	E0425 18:54:45.427396       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tdqkk\": pod kindnet-tdqkk is already assigned to node \"ha-912667-m04\"" pod="kube-system/kindnet-tdqkk"
	I0425 18:54:45.427574       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tdqkk" node="ha-912667-m04"
	E0425 18:54:45.446814       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-55svm\": pod kube-proxy-55svm is already assigned to node \"ha-912667-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-55svm" node="ha-912667-m04"
	E0425 18:54:45.447116       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 29859480-e924-4cec-bc56-f342570ee22a(kube-system/kube-proxy-55svm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-55svm"
	E0425 18:54:45.447223       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-55svm\": pod kube-proxy-55svm is already assigned to node \"ha-912667-m04\"" pod="kube-system/kube-proxy-55svm"
	I0425 18:54:45.447369       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-55svm" node="ha-912667-m04"
	
	
	==> kubelet <==
	Apr 25 18:54:18 ha-912667 kubelet[1386]: E0425 18:54:18.915224    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:54:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:54:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:54:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:54:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 18:55:18 ha-912667 kubelet[1386]: E0425 18:55:18.915306    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:55:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:55:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:55:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:55:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 18:56:18 ha-912667 kubelet[1386]: E0425 18:56:18.917242    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:56:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:56:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:56:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:56:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 18:57:18 ha-912667 kubelet[1386]: E0425 18:57:18.913233    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:57:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:57:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:57:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:57:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 18:58:18 ha-912667 kubelet[1386]: E0425 18:58:18.913989    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 18:58:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 18:58:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 18:58:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 18:58:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-912667 -n ha-912667
helpers_test.go:261: (dbg) Run:  kubectl --context ha-912667 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.02s)
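Editor's note: the etcd log collected above is dominated by "dropped internal Raft message since sending buffer is full" warnings for peer 4063ddbba048d8b6 with remote-peer-active:false, i.e. one control-plane member was still unreachable while these post-mortem logs were gathered. A minimal manual health check, not part of the test run, is sketched below; it assumes the static-pod name follows the usual etcd-<nodeName> convention and that minikube's etcd keeps its certificates under /var/lib/minikube/certs/etcd (both are assumptions, not taken from this report):

    # list every member's health from inside the surviving control-plane's etcd pod
    kubectl --context ha-912667 -n kube-system exec etcd-ha-912667 -- etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health --cluster

A member that answers "is healthy" here while the log still shows remote-peer-active:false would point at a transient condition during log collection rather than a persistent split.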

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (372.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-912667 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-912667 -v=7 --alsologtostderr
E0425 18:58:36.328243   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:59:04.012434   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-912667 -v=7 --alsologtostderr: exit status 82 (2m2.737230537s)

                                                
                                                
-- stdout --
	* Stopping node "ha-912667-m04"  ...
	* Stopping node "ha-912667-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 18:58:32.976061   30206 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:58:32.976167   30206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:58:32.976175   30206 out.go:304] Setting ErrFile to fd 2...
	I0425 18:58:32.976179   30206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:58:32.976375   30206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:58:32.976623   30206 out.go:298] Setting JSON to false
	I0425 18:58:32.976706   30206 mustload.go:65] Loading cluster: ha-912667
	I0425 18:58:32.977058   30206 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:58:32.977148   30206 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 18:58:32.977319   30206 mustload.go:65] Loading cluster: ha-912667
	I0425 18:58:32.977449   30206 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:58:32.977473   30206 stop.go:39] StopHost: ha-912667-m04
	I0425 18:58:32.977859   30206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:32.977933   30206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:32.992993   30206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43767
	I0425 18:58:32.993484   30206 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:32.994084   30206 main.go:141] libmachine: Using API Version  1
	I0425 18:58:32.994111   30206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:32.994431   30206 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:32.997139   30206 out.go:177] * Stopping node "ha-912667-m04"  ...
	I0425 18:58:32.998408   30206 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0425 18:58:32.998445   30206 main.go:141] libmachine: (ha-912667-m04) Calling .DriverName
	I0425 18:58:32.998695   30206 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0425 18:58:32.998723   30206 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHHostname
	I0425 18:58:33.001751   30206 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:33.002196   30206 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:54:31 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 18:58:33.002243   30206 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 18:58:33.002407   30206 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHPort
	I0425 18:58:33.002625   30206 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHKeyPath
	I0425 18:58:33.002779   30206 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHUsername
	I0425 18:58:33.002930   30206 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m04/id_rsa Username:docker}
	I0425 18:58:33.094059   30206 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0425 18:58:33.149364   30206 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0425 18:58:33.207707   30206 main.go:141] libmachine: Stopping "ha-912667-m04"...
	I0425 18:58:33.207739   30206 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 18:58:33.209406   30206 main.go:141] libmachine: (ha-912667-m04) Calling .Stop
	I0425 18:58:33.212968   30206 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 0/120
	I0425 18:58:34.214630   30206 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 1/120
	I0425 18:58:35.217401   30206 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 18:58:35.218822   30206 main.go:141] libmachine: Machine "ha-912667-m04" was stopped.
	I0425 18:58:35.218839   30206 stop.go:75] duration metric: took 2.220435285s to stop
	I0425 18:58:35.218858   30206 stop.go:39] StopHost: ha-912667-m03
	I0425 18:58:35.219140   30206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:58:35.219183   30206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:58:35.233961   30206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35645
	I0425 18:58:35.234419   30206 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:58:35.234910   30206 main.go:141] libmachine: Using API Version  1
	I0425 18:58:35.234929   30206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:58:35.235237   30206 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:58:35.237229   30206 out.go:177] * Stopping node "ha-912667-m03"  ...
	I0425 18:58:35.238475   30206 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0425 18:58:35.238500   30206 main.go:141] libmachine: (ha-912667-m03) Calling .DriverName
	I0425 18:58:35.238710   30206 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0425 18:58:35.238737   30206 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHHostname
	I0425 18:58:35.241720   30206 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:35.242166   30206 main.go:141] libmachine: (ha-912667-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:3e:7a", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:53:07 +0000 UTC Type:0 Mac:52:54:00:fb:3e:7a Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:ha-912667-m03 Clientid:01:52:54:00:fb:3e:7a}
	I0425 18:58:35.242195   30206 main.go:141] libmachine: (ha-912667-m03) DBG | domain ha-912667-m03 has defined IP address 192.168.39.179 and MAC address 52:54:00:fb:3e:7a in network mk-ha-912667
	I0425 18:58:35.242347   30206 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHPort
	I0425 18:58:35.242528   30206 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHKeyPath
	I0425 18:58:35.242688   30206 main.go:141] libmachine: (ha-912667-m03) Calling .GetSSHUsername
	I0425 18:58:35.242852   30206 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m03/id_rsa Username:docker}
	I0425 18:58:35.330480   30206 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0425 18:58:35.385978   30206 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0425 18:58:35.445040   30206 main.go:141] libmachine: Stopping "ha-912667-m03"...
	I0425 18:58:35.445073   30206 main.go:141] libmachine: (ha-912667-m03) Calling .GetState
	I0425 18:58:35.446698   30206 main.go:141] libmachine: (ha-912667-m03) Calling .Stop
	I0425 18:58:35.449969   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 0/120
	I0425 18:58:36.451820   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 1/120
	I0425 18:58:37.453200   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 2/120
	I0425 18:58:38.454705   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 3/120
	I0425 18:58:39.456159   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 4/120
	I0425 18:58:40.457737   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 5/120
	I0425 18:58:41.459320   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 6/120
	I0425 18:58:42.460686   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 7/120
	I0425 18:58:43.462324   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 8/120
	I0425 18:58:44.463715   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 9/120
	I0425 18:58:45.465963   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 10/120
	I0425 18:58:46.467376   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 11/120
	I0425 18:58:47.469616   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 12/120
	I0425 18:58:48.471230   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 13/120
	I0425 18:58:49.472801   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 14/120
	I0425 18:58:50.474337   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 15/120
	I0425 18:58:51.475651   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 16/120
	I0425 18:58:52.476955   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 17/120
	I0425 18:58:53.478438   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 18/120
	I0425 18:58:54.480113   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 19/120
	I0425 18:58:55.482699   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 20/120
	I0425 18:58:56.484929   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 21/120
	I0425 18:58:57.486517   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 22/120
	I0425 18:58:58.488576   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 23/120
	I0425 18:58:59.490191   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 24/120
	I0425 18:59:00.492077   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 25/120
	I0425 18:59:01.493487   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 26/120
	I0425 18:59:02.494820   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 27/120
	I0425 18:59:03.496551   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 28/120
	I0425 18:59:04.497860   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 29/120
	I0425 18:59:05.499607   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 30/120
	I0425 18:59:06.501224   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 31/120
	I0425 18:59:07.502691   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 32/120
	I0425 18:59:08.504089   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 33/120
	I0425 18:59:09.505504   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 34/120
	I0425 18:59:10.507200   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 35/120
	I0425 18:59:11.508529   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 36/120
	I0425 18:59:12.509815   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 37/120
	I0425 18:59:13.511413   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 38/120
	I0425 18:59:14.512755   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 39/120
	I0425 18:59:15.514715   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 40/120
	I0425 18:59:16.516757   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 41/120
	I0425 18:59:17.518018   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 42/120
	I0425 18:59:18.519400   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 43/120
	I0425 18:59:19.520719   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 44/120
	I0425 18:59:20.522584   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 45/120
	I0425 18:59:21.523849   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 46/120
	I0425 18:59:22.525186   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 47/120
	I0425 18:59:23.526554   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 48/120
	I0425 18:59:24.528902   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 49/120
	I0425 18:59:25.531395   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 50/120
	I0425 18:59:26.532766   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 51/120
	I0425 18:59:27.534606   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 52/120
	I0425 18:59:28.535973   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 53/120
	I0425 18:59:29.538170   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 54/120
	I0425 18:59:30.540420   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 55/120
	I0425 18:59:31.541749   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 56/120
	I0425 18:59:32.543164   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 57/120
	I0425 18:59:33.544660   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 58/120
	I0425 18:59:34.546327   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 59/120
	I0425 18:59:35.548275   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 60/120
	I0425 18:59:36.549380   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 61/120
	I0425 18:59:37.550789   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 62/120
	I0425 18:59:38.552107   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 63/120
	I0425 18:59:39.553671   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 64/120
	I0425 18:59:40.555548   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 65/120
	I0425 18:59:41.557052   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 66/120
	I0425 18:59:42.558401   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 67/120
	I0425 18:59:43.560600   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 68/120
	I0425 18:59:44.562164   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 69/120
	I0425 18:59:45.563343   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 70/120
	I0425 18:59:46.564594   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 71/120
	I0425 18:59:47.566266   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 72/120
	I0425 18:59:48.567540   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 73/120
	I0425 18:59:49.568755   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 74/120
	I0425 18:59:50.570852   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 75/120
	I0425 18:59:51.572727   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 76/120
	I0425 18:59:52.574082   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 77/120
	I0425 18:59:53.575463   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 78/120
	I0425 18:59:54.577032   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 79/120
	I0425 18:59:55.578286   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 80/120
	I0425 18:59:56.579625   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 81/120
	I0425 18:59:57.580916   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 82/120
	I0425 18:59:58.582237   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 83/120
	I0425 18:59:59.583763   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 84/120
	I0425 19:00:00.585565   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 85/120
	I0425 19:00:01.587360   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 86/120
	I0425 19:00:02.589401   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 87/120
	I0425 19:00:03.590686   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 88/120
	I0425 19:00:04.592890   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 89/120
	I0425 19:00:05.595176   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 90/120
	I0425 19:00:06.596714   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 91/120
	I0425 19:00:07.598312   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 92/120
	I0425 19:00:08.599754   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 93/120
	I0425 19:00:09.601261   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 94/120
	I0425 19:00:10.602713   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 95/120
	I0425 19:00:11.604292   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 96/120
	I0425 19:00:12.605781   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 97/120
	I0425 19:00:13.607213   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 98/120
	I0425 19:00:14.609110   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 99/120
	I0425 19:00:15.611458   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 100/120
	I0425 19:00:16.612963   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 101/120
	I0425 19:00:17.614412   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 102/120
	I0425 19:00:18.615944   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 103/120
	I0425 19:00:19.617371   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 104/120
	I0425 19:00:20.619341   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 105/120
	I0425 19:00:21.620886   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 106/120
	I0425 19:00:22.622354   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 107/120
	I0425 19:00:23.623934   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 108/120
	I0425 19:00:24.625697   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 109/120
	I0425 19:00:25.627504   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 110/120
	I0425 19:00:26.628903   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 111/120
	I0425 19:00:27.630365   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 112/120
	I0425 19:00:28.632067   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 113/120
	I0425 19:00:29.633553   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 114/120
	I0425 19:00:30.635596   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 115/120
	I0425 19:00:31.637108   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 116/120
	I0425 19:00:32.638739   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 117/120
	I0425 19:00:33.640829   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 118/120
	I0425 19:00:34.642525   30206 main.go:141] libmachine: (ha-912667-m03) Waiting for machine to stop 119/120
	I0425 19:00:35.643852   30206 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0425 19:00:35.643918   30206 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0425 19:00:35.645790   30206 out.go:177] 
	W0425 19:00:35.647095   30206 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0425 19:00:35.647111   30206 out.go:239] * 
	* 
	W0425 19:00:35.649294   30206 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 19:00:35.651293   30206 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p ha-912667 -v=7 --alsologtostderr" : exit status 82
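Editor's note: each "Waiting for machine to stop N/120" line above is roughly a one-second poll of the libvirt domain state (compare the timestamps), so minikube waited about two minutes for ha-912667-m03 before giving up with GUEST_STOP_TIMEOUT / exit status 82 while the domain still reported "Running". A hedged manual recovery sketch, assuming shell access to the KVM host (the Jenkins worker) and that the libvirt domain name matches the node name, neither of which is confirmed by this report:

    virsh list --all | grep ha-912667-m03   # confirm the domain is in fact still running
    virsh shutdown ha-912667-m03            # request a graceful ACPI shutdown
    virsh destroy ha-912667-m03             # hard power-off only if the guest never reacts

The test itself does not do this; it proceeds directly to the restart below, which is why the subsequent `start --wait=true` has to recover a node that was never cleanly stopped.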
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-912667 --wait=true -v=7 --alsologtostderr
E0425 19:00:45.438839   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 19:02:08.486449   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 19:03:36.328132   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-912667 --wait=true -v=7 --alsologtostderr: (4m6.665558172s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-912667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-912667 -n ha-912667
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-912667 logs -n 25: (2.057787515s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m02:/home/docker/cp-test_ha-912667-m03_ha-912667-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m02 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m03_ha-912667-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04:/home/docker/cp-test_ha-912667-m03_ha-912667-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m04 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m03_ha-912667-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp testdata/cp-test.txt                                              | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile60710412/001/cp-test_ha-912667-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667:/home/docker/cp-test_ha-912667-m04_ha-912667.txt                     |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667 sudo cat                                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667.txt                               |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m02:/home/docker/cp-test_ha-912667-m04_ha-912667-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m02 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03:/home/docker/cp-test_ha-912667-m04_ha-912667-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m03 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-912667 node stop m02 -v=7                                                   | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-912667 node start m02 -v=7                                                  | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:57 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-912667 -v=7                                                         | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:58 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-912667 -v=7                                                              | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:58 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-912667 --wait=true -v=7                                                  | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 19:00 UTC | 25 Apr 24 19:04 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-912667                                                              | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 19:04 UTC |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
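	
	The tail of the audit table above is the restart sequence under test: stop one
	secondary control-plane node, start it again, stop the whole cluster, then bring it
	back with --wait=true. A condensed replay of those commands (profile and flags as
	recorded in the Command/Args columns; the -p flag placement is reconstructed):
	
	  minikube node stop m02 -p ha-912667 -v=7 --alsologtostderr
	  minikube node start m02 -p ha-912667 -v=7 --alsologtostderr
	  minikube node list -p ha-912667 -v=7 --alsologtostderr
	  minikube stop -p ha-912667 -v=7 --alsologtostderr
	  minikube start -p ha-912667 --wait=true -v=7 --alsologtostderr
	  minikube node list -p ha-912667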
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:00:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:00:35.714252   30712 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:00:35.714369   30712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:00:35.714379   30712 out.go:304] Setting ErrFile to fd 2...
	I0425 19:00:35.714384   30712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:00:35.714602   30712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:00:35.715150   30712 out.go:298] Setting JSON to false
	I0425 19:00:35.716127   30712 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2582,"bootTime":1714069054,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:00:35.716188   30712 start.go:139] virtualization: kvm guest
	I0425 19:00:35.718896   30712 out.go:177] * [ha-912667] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:00:35.720707   30712 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:00:35.720692   30712 notify.go:220] Checking for updates...
	I0425 19:00:35.722721   30712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:00:35.724284   30712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:00:35.725817   30712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:00:35.727182   30712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:00:35.728662   30712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:00:35.730356   30712 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:00:35.730445   30712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:00:35.730817   30712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:00:35.730852   30712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:00:35.748209   30712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0425 19:00:35.748639   30712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:00:35.749107   30712 main.go:141] libmachine: Using API Version  1
	I0425 19:00:35.749125   30712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:00:35.749450   30712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:00:35.749620   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:00:35.783601   30712 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:00:35.785049   30712 start.go:297] selected driver: kvm2
	I0425 19:00:35.785063   30712 start.go:901] validating driver "kvm2" against &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.232 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:00:35.785250   30712 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:00:35.785768   30712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:00:35.785889   30712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:00:35.799984   30712 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:00:35.800891   30712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:00:35.800965   30712 cni.go:84] Creating CNI manager for ""
	I0425 19:00:35.800981   30712 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0425 19:00:35.801044   30712 start.go:340] cluster config:
	{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.232 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:00:35.801216   30712 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:00:35.803114   30712 out.go:177] * Starting "ha-912667" primary control-plane node in "ha-912667" cluster
	I0425 19:00:35.804518   30712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:00:35.804546   30712 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:00:35.804552   30712 cache.go:56] Caching tarball of preloaded images
	I0425 19:00:35.804628   30712 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:00:35.804643   30712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 19:00:35.804773   30712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 19:00:35.804953   30712 start.go:360] acquireMachinesLock for ha-912667: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:00:35.805000   30712 start.go:364] duration metric: took 31.437µs to acquireMachinesLock for "ha-912667"
	I0425 19:00:35.805014   30712 start.go:96] Skipping create...Using existing machine configuration
	I0425 19:00:35.805021   30712 fix.go:54] fixHost starting: 
	I0425 19:00:35.805256   30712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:00:35.805283   30712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:00:35.819008   30712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
	I0425 19:00:35.819412   30712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:00:35.819874   30712 main.go:141] libmachine: Using API Version  1
	I0425 19:00:35.819890   30712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:00:35.820150   30712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:00:35.820331   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:00:35.820453   30712 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 19:00:35.821988   30712 fix.go:112] recreateIfNeeded on ha-912667: state=Running err=<nil>
	W0425 19:00:35.822008   30712 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 19:00:35.824006   30712 out.go:177] * Updating the running kvm2 "ha-912667" VM ...
	I0425 19:00:35.825258   30712 machine.go:94] provisionDockerMachine start ...
	I0425 19:00:35.825278   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:00:35.825446   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:35.827950   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:35.828404   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:35.828431   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:35.828551   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:00:35.828720   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:35.828917   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:35.829038   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:00:35.829180   30712 main.go:141] libmachine: Using SSH client type: native
	I0425 19:00:35.829372   30712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 19:00:35.829387   30712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 19:00:35.945081   30712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-912667
	
	I0425 19:00:35.945101   30712 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 19:00:35.945336   30712 buildroot.go:166] provisioning hostname "ha-912667"
	I0425 19:00:35.945360   30712 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 19:00:35.945537   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:35.948199   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:35.948542   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:35.948575   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:35.948774   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:00:35.948935   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:35.949139   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:35.949265   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:00:35.949432   30712 main.go:141] libmachine: Using SSH client type: native
	I0425 19:00:35.949586   30712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 19:00:35.949598   30712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-912667 && echo "ha-912667" | sudo tee /etc/hostname
	I0425 19:00:36.072731   30712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-912667
	
	I0425 19:00:36.072760   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:36.075474   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.075793   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:36.075816   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.076045   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:00:36.076253   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:36.076421   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:36.076607   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:00:36.076784   30712 main.go:141] libmachine: Using SSH client type: native
	I0425 19:00:36.076945   30712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 19:00:36.076961   30712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-912667' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-912667/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-912667' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 19:00:36.187933   30712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 19:00:36.187965   30712 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 19:00:36.188013   30712 buildroot.go:174] setting up certificates
	I0425 19:00:36.188034   30712 provision.go:84] configureAuth start
	I0425 19:00:36.188056   30712 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 19:00:36.188394   30712 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 19:00:36.191154   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.191573   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:36.191609   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.191762   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:36.193980   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.194339   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:36.194363   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.194550   30712 provision.go:143] copyHostCerts
	I0425 19:00:36.194582   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:00:36.194624   30712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 19:00:36.194636   30712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:00:36.194716   30712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 19:00:36.194823   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:00:36.194849   30712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 19:00:36.194856   30712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:00:36.194899   30712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 19:00:36.194958   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:00:36.194989   30712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 19:00:36.194998   30712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:00:36.195031   30712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 19:00:36.195092   30712 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.ha-912667 san=[127.0.0.1 192.168.39.189 ha-912667 localhost minikube]
	I0425 19:00:36.404154   30712 provision.go:177] copyRemoteCerts
	I0425 19:00:36.404229   30712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 19:00:36.404255   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:36.406916   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.407260   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:36.407284   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.407436   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:00:36.407622   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:36.407782   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:00:36.407897   30712 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 19:00:36.493180   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 19:00:36.493259   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 19:00:36.522703   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 19:00:36.522805   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0425 19:00:36.552554   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 19:00:36.552639   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 19:00:36.581173   30712 provision.go:87] duration metric: took 393.12035ms to configureAuth
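	A quick way to confirm this step landed the TLS material where the scp lines above
	say it did is to list those remote paths (a sketch, not a command from this run):
	  minikube -p ha-912667 ssh -- sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem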
	I0425 19:00:36.581202   30712 buildroot.go:189] setting minikube options for container-runtime
	I0425 19:00:36.581439   30712 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:00:36.581534   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:36.583938   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.584312   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:36.584339   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.584545   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:00:36.584748   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:36.584928   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:36.585061   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:00:36.585212   30712 main.go:141] libmachine: Using SSH client type: native
	I0425 19:00:36.585382   30712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 19:00:36.585399   30712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 19:02:07.501252   30712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 19:02:07.501281   30712 machine.go:97] duration metric: took 1m31.676011283s to provisionDockerMachine
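	Nearly all of that 1m31s sits between the SSH command issued at 19:00:36 and its
	result at 19:02:07, i.e. the cri-o restart triggered by writing the registry
	option. A minimal sketch of that step, run on the guest VM (paths taken from the
	log; timing the restart separately shows whether cri-o itself is the bottleneck):
	  sudo mkdir -p /etc/sysconfig
	  printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	  time sudo systemctl restart crio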
	I0425 19:02:07.501295   30712 start.go:293] postStartSetup for "ha-912667" (driver="kvm2")
	I0425 19:02:07.501307   30712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 19:02:07.501322   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.501668   30712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 19:02:07.501702   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:02:07.504671   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.505070   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.505096   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.505309   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:02:07.505509   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.505640   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:02:07.505760   30712 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 19:02:07.591385   30712 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 19:02:07.597458   30712 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 19:02:07.597484   30712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 19:02:07.597542   30712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 19:02:07.597606   30712 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 19:02:07.597617   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 19:02:07.597693   30712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 19:02:07.609135   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:02:07.635515   30712 start.go:296] duration metric: took 134.203777ms for postStartSetup
	I0425 19:02:07.635567   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.635886   30712 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0425 19:02:07.635918   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:02:07.638341   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.638711   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.638737   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.638889   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:02:07.639069   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.639186   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:02:07.639322   30712 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	W0425 19:02:07.722422   30712 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0425 19:02:07.722449   30712 fix.go:56] duration metric: took 1m31.917426388s for fixHost
	I0425 19:02:07.722474   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:02:07.724768   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.725163   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.725193   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.725300   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:02:07.725460   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.725610   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.725786   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:02:07.725951   30712 main.go:141] libmachine: Using SSH client type: native
	I0425 19:02:07.726114   30712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 19:02:07.726125   30712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 19:02:07.831970   30712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714071727.806375549
	
	I0425 19:02:07.831992   30712 fix.go:216] guest clock: 1714071727.806375549
	I0425 19:02:07.831999   30712 fix.go:229] Guest: 2024-04-25 19:02:07.806375549 +0000 UTC Remote: 2024-04-25 19:02:07.722458379 +0000 UTC m=+92.060875887 (delta=83.91717ms)
	I0425 19:02:07.832035   30712 fix.go:200] guest clock delta is within tolerance: 83.91717ms
	I0425 19:02:07.832040   30712 start.go:83] releasing machines lock for "ha-912667", held for 1m32.027031339s
	I0425 19:02:07.832059   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.832326   30712 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 19:02:07.835035   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.835420   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.835451   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.835569   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.836152   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.836348   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.836447   30712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 19:02:07.836488   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:02:07.836600   30712 ssh_runner.go:195] Run: cat /version.json
	I0425 19:02:07.836631   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:02:07.839030   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.839373   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.839401   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.839423   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.839525   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:02:07.839694   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.839841   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:02:07.839869   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.839900   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.839991   30712 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 19:02:07.840033   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:02:07.840178   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.840337   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:02:07.840477   30712 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 19:02:07.920227   30712 ssh_runner.go:195] Run: systemctl --version
	I0425 19:02:07.943706   30712 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 19:02:08.110896   30712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 19:02:08.120528   30712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 19:02:08.120601   30712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 19:02:08.132132   30712 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0425 19:02:08.132152   30712 start.go:494] detecting cgroup driver to use...
	I0425 19:02:08.132214   30712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 19:02:08.152138   30712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 19:02:08.167744   30712 docker.go:217] disabling cri-docker service (if available) ...
	I0425 19:02:08.167816   30712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 19:02:08.184986   30712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 19:02:08.202823   30712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 19:02:08.359055   30712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 19:02:08.514315   30712 docker.go:233] disabling docker service ...
	I0425 19:02:08.514379   30712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 19:02:08.532743   30712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 19:02:08.547170   30712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 19:02:08.700369   30712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 19:02:08.854817   30712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 19:02:08.871547   30712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 19:02:08.895252   30712 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 19:02:08.895340   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.907619   30712 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 19:02:08.907691   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.919696   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.931320   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.942616   30712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 19:02:08.954598   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.965787   30712 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.978477   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.989743   30712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 19:02:09.000038   30712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 19:02:09.009796   30712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:02:09.163791   30712 ssh_runner.go:195] Run: sudo systemctl restart crio
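	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with
	roughly these values (reconstructed from the commands, not captured from the VM;
	the bracketed section names are cri-o's defaults and are an assumption here):
	  pause_image = "registry.k8s.io/pause:3.9"   # [crio.image]
	  cgroup_manager = "cgroupfs"                 # [crio.runtime]
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]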
	I0425 19:02:10.078544   30712 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 19:02:10.078620   30712 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 19:02:10.084969   30712 start.go:562] Will wait 60s for crictl version
	I0425 19:02:10.085047   30712 ssh_runner.go:195] Run: which crictl
	I0425 19:02:10.089776   30712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 19:02:10.140486   30712 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 19:02:10.140588   30712 ssh_runner.go:195] Run: crio --version
	I0425 19:02:10.173563   30712 ssh_runner.go:195] Run: crio --version
	I0425 19:02:10.209225   30712 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 19:02:10.210556   30712 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 19:02:10.213233   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:10.213577   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:10.213606   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:10.213810   30712 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 19:02:10.219074   30712 kubeadm.go:877] updating cluster {Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.232 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 19:02:10.219190   30712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:02:10.219226   30712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:02:10.269189   30712 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:02:10.269209   30712 crio.go:433] Images already preloaded, skipping extraction
	I0425 19:02:10.269256   30712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:02:10.307094   30712 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:02:10.307113   30712 cache_images.go:84] Images are preloaded, skipping loading
	I0425 19:02:10.307121   30712 kubeadm.go:928] updating node { 192.168.39.189 8443 v1.30.0 crio true true} ...
	I0425 19:02:10.307221   30712 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-912667 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
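	The ExecStart override above is written as a systemd drop-in on the node; the flags
	kubelet actually starts with can be checked after the fact (a sketch, not part of
	this run):
	  minikube -p ha-912667 ssh -- systemctl cat kubelet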
	I0425 19:02:10.307287   30712 ssh_runner.go:195] Run: crio config
	I0425 19:02:10.364052   30712 cni.go:84] Creating CNI manager for ""
	I0425 19:02:10.364072   30712 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0425 19:02:10.364083   30712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 19:02:10.364102   30712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-912667 NodeName:ha-912667 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 19:02:10.364231   30712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-912667"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.189
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
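
	The kubeadm bundle above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is rendered in memory and, a few lines further down, copied to the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of inspecting that file on the control-plane VM, assuming the ha-912667 profile from this log and that the copy shown below has completed:

	    $ minikube -p ha-912667 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    $ minikube -p ha-912667 ssh -- sudo kubeadm config print init-defaults   # compare against upstream defaults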
	
	I0425 19:02:10.364256   30712 kube-vip.go:111] generating kube-vip config ...
	I0425 19:02:10.364292   30712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0425 19:02:10.379175   30712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0425 19:02:10.379273   30712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
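	The static pod above runs kube-vip on each control-plane node; the instances elect a leader and the leader answers ARP for the API-server VIP 192.168.39.254 on eth0, with lb_enable/lb_port additionally spreading port 8443 across the control planes. A minimal sketch of checking which node currently holds the VIP, assuming the ha-912667 profile, that the kubectl context carries the same name, and that kube-vip records its election in a coordination.k8s.io Lease named plndr-cp-lock (the vip_leasename set above):

	    $ minikube -p ha-912667 ssh -- ip addr show eth0 | grep 192.168.39.254
	    $ kubectl --context ha-912667 -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'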
	I0425 19:02:10.379320   30712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 19:02:10.390441   30712 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 19:02:10.390497   30712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0425 19:02:10.402548   30712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0425 19:02:10.422214   30712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 19:02:10.440272   30712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0425 19:02:10.458608   30712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0425 19:02:10.479035   30712 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0425 19:02:10.483541   30712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:02:10.649521   30712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 19:02:10.666543   30712 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667 for IP: 192.168.39.189
	I0425 19:02:10.666567   30712 certs.go:194] generating shared ca certs ...
	I0425 19:02:10.666588   30712 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:02:10.666764   30712 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 19:02:10.666838   30712 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 19:02:10.666851   30712 certs.go:256] generating profile certs ...
	I0425 19:02:10.666958   30712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key
	I0425 19:02:10.666995   30712 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.5d430312
	I0425 19:02:10.667011   30712 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.5d430312 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189 192.168.39.66 192.168.39.179 192.168.39.254]
	I0425 19:02:10.846879   30712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.5d430312 ...
	I0425 19:02:10.846911   30712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.5d430312: {Name:mk7d97a128946db98f43e52607d66bc2c3314779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:02:10.847075   30712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.5d430312 ...
	I0425 19:02:10.847087   30712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.5d430312: {Name:mk4287911b1bba38d86f72f1ea7d421bb210d31c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:02:10.847157   30712 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.5d430312 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt
	I0425 19:02:10.847310   30712 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.5d430312 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key
	I0425 19:02:10.847437   30712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key
	I0425 19:02:10.847451   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 19:02:10.847464   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 19:02:10.847479   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 19:02:10.847492   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 19:02:10.847506   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 19:02:10.847518   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 19:02:10.847528   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 19:02:10.847541   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 19:02:10.847584   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 19:02:10.847609   30712 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 19:02:10.847619   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 19:02:10.847651   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 19:02:10.847673   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 19:02:10.847692   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 19:02:10.847727   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:02:10.847755   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 19:02:10.847770   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 19:02:10.847782   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:02:10.848343   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 19:02:10.879752   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 19:02:10.910020   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 19:02:10.939462   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 19:02:10.968462   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0425 19:02:10.997075   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 19:02:11.026613   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 19:02:11.054919   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0425 19:02:11.083524   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 19:02:11.112094   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 19:02:11.137420   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 19:02:11.165135   30712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 19:02:11.184099   30712 ssh_runner.go:195] Run: openssl version
	I0425 19:02:11.190963   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 19:02:11.203003   30712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 19:02:11.208176   30712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 19:02:11.208228   30712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 19:02:11.215287   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 19:02:11.226501   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 19:02:11.239446   30712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 19:02:11.245052   30712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:02:11.245124   30712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 19:02:11.251698   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 19:02:11.263482   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 19:02:11.276255   30712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:02:11.282115   30712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:02:11.282183   30712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:02:11.289037   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
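	The .0 symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the corresponding certificates, which is how the system trust directory indexes CA files; the link created in the previous line implies that minikubeCA.pem hashes to b5213941. A minimal sketch of reproducing the mapping by hand, assuming the same file layout on the node:

	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	    $ ls -l /etc/ssl/certs/b5213941.0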
	I0425 19:02:11.300467   30712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 19:02:11.306162   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 19:02:11.313406   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 19:02:11.319784   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 19:02:11.326282   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 19:02:11.332797   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 19:02:11.339707   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
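	Each of the -checkend 86400 probes above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means it will not expire inside that window, so minikube can skip regenerating it. A minimal sketch of the same check against one of the certificates listed above, run on the node under the same assumptions:

	    $ openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -enddate
	    $ openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 && echo "valid for at least another 24h"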
	I0425 19:02:11.347446   30712 kubeadm.go:391] StartCluster: {Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.232 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:02:11.347573   30712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 19:02:11.347614   30712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 19:02:11.399260   30712 cri.go:89] found id: "1211fe8cf15a145726136383a04b807104fd7b5d177b97cd1a5a6edae325cf97"
	I0425 19:02:11.399281   30712 cri.go:89] found id: "ef1831847cd85fa4ac3e3f05b1280b29e6a5a53ca491342d6634a119e3dff4f4"
	I0425 19:02:11.399285   30712 cri.go:89] found id: "65857c225af2b5971d31044aaaa5a7c2b1134e809bd7c368565df21afa7b2735"
	I0425 19:02:11.399289   30712 cri.go:89] found id: "7b85242a1dd03e4116bf4a4a811d120c72ac40179e8fde0fe2d73503f49c8737"
	I0425 19:02:11.399292   30712 cri.go:89] found id: "8479138ced5e5a6b00b685a1538c683197de7083d857d194836fcffa26fc2cfb"
	I0425 19:02:11.399295   30712 cri.go:89] found id: "853ae533d68261b7aaa8b7604ae60d64f17d8fa31a0f38accbfb5a4fc7f51012"
	I0425 19:02:11.399297   30712 cri.go:89] found id: "5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786"
	I0425 19:02:11.399300   30712 cri.go:89] found id: "877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275"
	I0425 19:02:11.399304   30712 cri.go:89] found id: "35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386"
	I0425 19:02:11.399310   30712 cri.go:89] found id: "e24e946cc9871d59976b6e84efd38da336416d3442e75673080a8e5eb92ed6d4"
	I0425 19:02:11.399318   30712 cri.go:89] found id: "6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5"
	I0425 19:02:11.399323   30712 cri.go:89] found id: "860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f"
	I0425 19:02:11.399327   30712 cri.go:89] found id: "9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98"
	I0425 19:02:11.399331   30712 cri.go:89] found id: "8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353"
	I0425 19:02:11.399339   30712 cri.go:89] found id: ""
	I0425 19:02:11.399384   30712 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.146451947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714071883146411574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=296358ed-a2dc-418a-8d9e-592b1f44accf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.147563446Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ff0b4e2-8e37-4c4a-8ddf-3bec94a5372e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.147650860Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ff0b4e2-8e37-4c4a-8ddf-3bec94a5372e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.148338911Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:664d121edb6b713211c13c0cedfbd4e6ff816158d01902cca3a3dc628d413f71,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714071817893557387,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b482009bb4bda86ed80aaf6ffbbdaeac0d3c80aac4919534d3d93ff7a0cfd128,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071802893423666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35af403e5f5b77c282e2ab8be29c6a089e75d1f1c8a54fd06c0799c3de43e0d1,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071777898678801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9375cf649d3fdd55b73e4c5640030d0b39a95f084260b601490e3388f4820a6a,PodSandboxId:2b8901f4a6c6a571896ff7dd2b68466ed43867b879bde5af06d0be6b525dc65d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071770037840128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1257292410198065593c6f0f876643b3d20d2dd3e8011891b55d35e4758d63,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071768185327503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd34a8712f61ebcf0d16486428c1a8ae453956567861a37e43f74936bb9d32f,PodSandboxId:d3c2d7d029f167c48c7289d45bccaf1c339aed778ac71b4d716cd26fce459c95,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071751848498714,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469b88169de51b24d813181338c887bc,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:9a710c78ee141c7c5c9eb1a047b80fdb89959cf74148c464b8565c4350725fea,PodSandboxId:f2c84e148f9ed49d3c243d2f4ac490df3be9fdd31e14b148d7b417aaf79b7837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071737446777323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:7666e74773b8beaa8c78a2cffc10db8d396168ac8eb484af76b1d5dad8cdf736,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714071737402882286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380ad5799
738ff5b76de6315d59529ac9c8a67ba2e59ae5eead7ec951d80f6b7,PodSandboxId:c14af9e5af973eadc39cc9450066a894ed0fc80b6553e93c87ffacafc89f2c87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736619518745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714071736691086314,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8279db081c114756c0ef4369b7f2dcd81110abcda6769ff15356ef16d82899f,PodSandboxId:a49728b483c24f26ec07260fa0afa5e2160b2520c679e2d60b5d5bda447d6150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736643374877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3997d681dd3c6abf6fecf3119895f445d9d960e69cc4d6b33b77f4313810dda6,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714071736590333518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f799a7e1725b4f3b7c0a031b6fada2efc97f1662c8c5d5759c4beedb20b3807,PodSandboxId:fa91a613ac5de27f3594fc1fb14797d03ecfab3c4f49bca5b9135600c41cbfb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071736462151375,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b5eacd47457075997143150b5a47f1e32bc6ab5272420955b83158111ce6a3,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714071736420443998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74
c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e670ab4471745be8eecfacef997853b6afd5e8508a46b249cc8831adbbaf33,PodSandboxId:ac490e91cdf368f8ebbad78a2c6ce66b8f402bcf55de23c1889a0f0e2e13dfb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071736407586084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kuberne
tes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714071248602464773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernete
s.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034742632420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034727910480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714071032735268131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714071012728272369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714071012719284298,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ff0b4e2-8e37-4c4a-8ddf-3bec94a5372e name=/runtime.v1.RuntimeService/ListContainers
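	The ListContainers response above records restart counts after the restart under test: kube-apiserver is on attempt 3, kindnet-cni on attempt 4, and several first-generation containers remain in CONTAINER_EXITED state. A minimal sketch of pulling the same view directly from CRI-O on the node, assuming the ha-912667 profile and reusing the crictl filter already shown earlier in this log:

	    $ minikube -p ha-912667 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system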
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.199342146Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dff009f9-b944-4346-ad8d-b5be0c79cacb name=/runtime.v1.RuntimeService/Version
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.199448911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dff009f9-b944-4346-ad8d-b5be0c79cacb name=/runtime.v1.RuntimeService/Version
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.200964051Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7408c40d-52c3-4ee0-a946-ef7a77917cba name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.202184593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714071883202155468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7408c40d-52c3-4ee0-a946-ef7a77917cba name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.203136467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42151e71-92c7-4a08-9feb-f60d88b260cf name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.203220077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42151e71-92c7-4a08-9feb-f60d88b260cf name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.204132165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:664d121edb6b713211c13c0cedfbd4e6ff816158d01902cca3a3dc628d413f71,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714071817893557387,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b482009bb4bda86ed80aaf6ffbbdaeac0d3c80aac4919534d3d93ff7a0cfd128,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071802893423666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35af403e5f5b77c282e2ab8be29c6a089e75d1f1c8a54fd06c0799c3de43e0d1,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071777898678801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9375cf649d3fdd55b73e4c5640030d0b39a95f084260b601490e3388f4820a6a,PodSandboxId:2b8901f4a6c6a571896ff7dd2b68466ed43867b879bde5af06d0be6b525dc65d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071770037840128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1257292410198065593c6f0f876643b3d20d2dd3e8011891b55d35e4758d63,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071768185327503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd34a8712f61ebcf0d16486428c1a8ae453956567861a37e43f74936bb9d32f,PodSandboxId:d3c2d7d029f167c48c7289d45bccaf1c339aed778ac71b4d716cd26fce459c95,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071751848498714,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469b88169de51b24d813181338c887bc,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:9a710c78ee141c7c5c9eb1a047b80fdb89959cf74148c464b8565c4350725fea,PodSandboxId:f2c84e148f9ed49d3c243d2f4ac490df3be9fdd31e14b148d7b417aaf79b7837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071737446777323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:7666e74773b8beaa8c78a2cffc10db8d396168ac8eb484af76b1d5dad8cdf736,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714071737402882286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380ad5799
738ff5b76de6315d59529ac9c8a67ba2e59ae5eead7ec951d80f6b7,PodSandboxId:c14af9e5af973eadc39cc9450066a894ed0fc80b6553e93c87ffacafc89f2c87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736619518745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714071736691086314,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8279db081c114756c0ef4369b7f2dcd81110abcda6769ff15356ef16d82899f,PodSandboxId:a49728b483c24f26ec07260fa0afa5e2160b2520c679e2d60b5d5bda447d6150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736643374877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3997d681dd3c6abf6fecf3119895f445d9d960e69cc4d6b33b77f4313810dda6,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714071736590333518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f799a7e1725b4f3b7c0a031b6fada2efc97f1662c8c5d5759c4beedb20b3807,PodSandboxId:fa91a613ac5de27f3594fc1fb14797d03ecfab3c4f49bca5b9135600c41cbfb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071736462151375,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b5eacd47457075997143150b5a47f1e32bc6ab5272420955b83158111ce6a3,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714071736420443998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74
c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e670ab4471745be8eecfacef997853b6afd5e8508a46b249cc8831adbbaf33,PodSandboxId:ac490e91cdf368f8ebbad78a2c6ce66b8f402bcf55de23c1889a0f0e2e13dfb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071736407586084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kuberne
tes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714071248602464773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernete
s.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034742632420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034727910480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714071032735268131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714071012728272369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714071012719284298,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42151e71-92c7-4a08-9feb-f60d88b260cf name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.263115150Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1436ea21-b3d4-4c09-b481-552ef48ad102 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.263211905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1436ea21-b3d4-4c09-b481-552ef48ad102 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.265648320Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=febe4d97-678f-4687-b612-12a509053ae1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.266147363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714071883266123220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=febe4d97-678f-4687-b612-12a509053ae1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.266829050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edb5d0f5-4cf1-4d30-a340-5649ee75464b name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.266928365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edb5d0f5-4cf1-4d30-a340-5649ee75464b name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.267329160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:664d121edb6b713211c13c0cedfbd4e6ff816158d01902cca3a3dc628d413f71,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714071817893557387,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b482009bb4bda86ed80aaf6ffbbdaeac0d3c80aac4919534d3d93ff7a0cfd128,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071802893423666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35af403e5f5b77c282e2ab8be29c6a089e75d1f1c8a54fd06c0799c3de43e0d1,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071777898678801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9375cf649d3fdd55b73e4c5640030d0b39a95f084260b601490e3388f4820a6a,PodSandboxId:2b8901f4a6c6a571896ff7dd2b68466ed43867b879bde5af06d0be6b525dc65d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071770037840128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1257292410198065593c6f0f876643b3d20d2dd3e8011891b55d35e4758d63,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071768185327503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd34a8712f61ebcf0d16486428c1a8ae453956567861a37e43f74936bb9d32f,PodSandboxId:d3c2d7d029f167c48c7289d45bccaf1c339aed778ac71b4d716cd26fce459c95,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071751848498714,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469b88169de51b24d813181338c887bc,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:9a710c78ee141c7c5c9eb1a047b80fdb89959cf74148c464b8565c4350725fea,PodSandboxId:f2c84e148f9ed49d3c243d2f4ac490df3be9fdd31e14b148d7b417aaf79b7837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071737446777323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:7666e74773b8beaa8c78a2cffc10db8d396168ac8eb484af76b1d5dad8cdf736,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714071737402882286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380ad5799
738ff5b76de6315d59529ac9c8a67ba2e59ae5eead7ec951d80f6b7,PodSandboxId:c14af9e5af973eadc39cc9450066a894ed0fc80b6553e93c87ffacafc89f2c87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736619518745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714071736691086314,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8279db081c114756c0ef4369b7f2dcd81110abcda6769ff15356ef16d82899f,PodSandboxId:a49728b483c24f26ec07260fa0afa5e2160b2520c679e2d60b5d5bda447d6150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736643374877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3997d681dd3c6abf6fecf3119895f445d9d960e69cc4d6b33b77f4313810dda6,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714071736590333518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f799a7e1725b4f3b7c0a031b6fada2efc97f1662c8c5d5759c4beedb20b3807,PodSandboxId:fa91a613ac5de27f3594fc1fb14797d03ecfab3c4f49bca5b9135600c41cbfb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071736462151375,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b5eacd47457075997143150b5a47f1e32bc6ab5272420955b83158111ce6a3,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714071736420443998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74
c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e670ab4471745be8eecfacef997853b6afd5e8508a46b249cc8831adbbaf33,PodSandboxId:ac490e91cdf368f8ebbad78a2c6ce66b8f402bcf55de23c1889a0f0e2e13dfb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071736407586084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kuberne
tes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714071248602464773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernete
s.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034742632420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034727910480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714071032735268131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714071012728272369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714071012719284298,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edb5d0f5-4cf1-4d30-a340-5649ee75464b name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.323183935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79f8ed95-36ed-4c5f-b1a1-3f58d413f225 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.323318621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79f8ed95-36ed-4c5f-b1a1-3f58d413f225 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.324980861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=194bb98a-cae1-408f-896b-0c5ba2418e37 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.325581774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714071883325552587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=194bb98a-cae1-408f-896b-0c5ba2418e37 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.326333164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a74e96b8-e625-4d13-b43b-1b98ee1f1217 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.326437731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a74e96b8-e625-4d13-b43b-1b98ee1f1217 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:04:43 ha-912667 crio[3935]: time="2024-04-25 19:04:43.327136055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:664d121edb6b713211c13c0cedfbd4e6ff816158d01902cca3a3dc628d413f71,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714071817893557387,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b482009bb4bda86ed80aaf6ffbbdaeac0d3c80aac4919534d3d93ff7a0cfd128,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071802893423666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35af403e5f5b77c282e2ab8be29c6a089e75d1f1c8a54fd06c0799c3de43e0d1,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071777898678801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9375cf649d3fdd55b73e4c5640030d0b39a95f084260b601490e3388f4820a6a,PodSandboxId:2b8901f4a6c6a571896ff7dd2b68466ed43867b879bde5af06d0be6b525dc65d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071770037840128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1257292410198065593c6f0f876643b3d20d2dd3e8011891b55d35e4758d63,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071768185327503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd34a8712f61ebcf0d16486428c1a8ae453956567861a37e43f74936bb9d32f,PodSandboxId:d3c2d7d029f167c48c7289d45bccaf1c339aed778ac71b4d716cd26fce459c95,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071751848498714,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469b88169de51b24d813181338c887bc,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:9a710c78ee141c7c5c9eb1a047b80fdb89959cf74148c464b8565c4350725fea,PodSandboxId:f2c84e148f9ed49d3c243d2f4ac490df3be9fdd31e14b148d7b417aaf79b7837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071737446777323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:7666e74773b8beaa8c78a2cffc10db8d396168ac8eb484af76b1d5dad8cdf736,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714071737402882286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380ad5799
738ff5b76de6315d59529ac9c8a67ba2e59ae5eead7ec951d80f6b7,PodSandboxId:c14af9e5af973eadc39cc9450066a894ed0fc80b6553e93c87ffacafc89f2c87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736619518745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714071736691086314,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8279db081c114756c0ef4369b7f2dcd81110abcda6769ff15356ef16d82899f,PodSandboxId:a49728b483c24f26ec07260fa0afa5e2160b2520c679e2d60b5d5bda447d6150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736643374877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3997d681dd3c6abf6fecf3119895f445d9d960e69cc4d6b33b77f4313810dda6,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714071736590333518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f799a7e1725b4f3b7c0a031b6fada2efc97f1662c8c5d5759c4beedb20b3807,PodSandboxId:fa91a613ac5de27f3594fc1fb14797d03ecfab3c4f49bca5b9135600c41cbfb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071736462151375,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b5eacd47457075997143150b5a47f1e32bc6ab5272420955b83158111ce6a3,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714071736420443998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74
c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e670ab4471745be8eecfacef997853b6afd5e8508a46b249cc8831adbbaf33,PodSandboxId:ac490e91cdf368f8ebbad78a2c6ce66b8f402bcf55de23c1889a0f0e2e13dfb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071736407586084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kuberne
tes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714071248602464773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernete
s.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034742632420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034727910480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714071032735268131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714071012728272369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714071012719284298,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a74e96b8-e625-4d13-b43b-1b98ee1f1217 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	664d121edb6b7       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               4                   8aa35c9f3e53f       kindnet-xlvjt
	b482009bb4bda       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   89a08f34ca242       storage-provisioner
	35af403e5f5b7       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   2                   2e26a50c7fc42       kube-controller-manager-ha-912667
	9375cf649d3fd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   2b8901f4a6c6a       busybox-fc5497c4f-nxhjn
	be12572924101       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            3                   0e0656ef80264       kube-apiserver-ha-912667
	2bd34a8712f61       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   d3c2d7d029f16       kube-vip-ha-912667
	9a710c78ee141       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      2 minutes ago        Running             kube-proxy                1                   f2c84e148f9ed       kube-proxy-mkgv5
	7666e74773b8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   89a08f34ca242       storage-provisioner
	15d248c866f48       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               3                   8aa35c9f3e53f       kindnet-xlvjt
	d8279db081c11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   a49728b483c24       coredns-7db6d8ff4d-h4s2h
	380ad5799738f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   c14af9e5af973       coredns-7db6d8ff4d-22wvx
	3997d681dd3c6       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      2 minutes ago        Exited              kube-controller-manager   1                   2e26a50c7fc42       kube-controller-manager-ha-912667
	5f799a7e1725b       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      2 minutes ago        Running             kube-scheduler            1                   fa91a613ac5de       kube-scheduler-ha-912667
	62b5eacd47457       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      2 minutes ago        Exited              kube-apiserver            2                   0e0656ef80264       kube-apiserver-ha-912667
	74e670ab44717       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   ac490e91cdf36       etcd-ha-912667
	cb806d6102b91       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   4a7d7ef3e980e       busybox-fc5497c4f-nxhjn
	5b5e973107f16       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   5f41aaba12a45       coredns-7db6d8ff4d-22wvx
	877510603b828       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   7eff20f80efe1       coredns-7db6d8ff4d-h4s2h
	35f0443a12a2f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      14 minutes ago       Exited              kube-proxy                0                   56d2b6ff099a0       kube-proxy-mkgv5
	6d0da8d06f797       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      14 minutes ago       Exited              kube-scheduler            0                   10902ac1c9f4f       kube-scheduler-ha-912667
	860c8d827dba6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   b27e008a10a06       etcd-ha-912667
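The container status table above shows kube-apiserver on restart attempt 3 (with attempt 2 exited), kube-controller-manager on attempt 2, and kindnet-cni and storage-provisioner both on attempt 4. As an illustrative follow-up (a sketch, not part of the captured output), the earlier attempts could be inspected over the same minikube ssh path this report already uses; the profile name ha-912667 and the container ID 62b5eacd47457 are taken from the table above:

  out/minikube-linux-amd64 -p ha-912667 ssh "sudo crictl ps -a --name kube-apiserver"   # confirm the restart history for the apiserver
  out/minikube-linux-amd64 -p ha-912667 ssh "sudo crictl logs --tail 50 62b5eacd47457"  # logs of the exited attempt 2 container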
	
	
	==> coredns [380ad5799738ff5b76de6315d59529ac9c8a67ba2e59ae5eead7ec951d80f6b7] <==
	Trace[617395045]: [10.001070639s] [10.001070639s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1985995287]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Apr-2024 19:02:21.356) (total time: 10002ms):
	Trace[1985995287]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (19:02:31.358)
	Trace[1985995287]: [10.002333169s] [10.002333169s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43328->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43328->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36734->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[422302785]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Apr-2024 19:02:28.301) (total time: 13532ms):
	Trace[422302785]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36734->10.96.0.1:443: read: connection reset by peer 13531ms (19:02:41.833)
	Trace[422302785]: [13.532325198s] [13.532325198s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36734->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43326->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43326->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786] <==
	[INFO] 10.244.0.4:32831 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001738929s
	[INFO] 10.244.1.2:38408 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00017538s
	[INFO] 10.244.2.2:37503 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003970142s
	[INFO] 10.244.2.2:40887 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218678s
	[INFO] 10.244.0.4:49981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001952122s
	[INFO] 10.244.0.4:56986 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183129s
	[INFO] 10.244.0.4:33316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126163s
	[INFO] 10.244.1.2:34817 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000365634s
	[INFO] 10.244.1.2:38909 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001350261s
	[INFO] 10.244.1.2:51802 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101088s
	[INFO] 10.244.2.2:47175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020899s
	[INFO] 10.244.2.2:46654 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000319039s
	[INFO] 10.244.2.2:36020 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135369s
	[INFO] 10.244.1.2:58245 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248988s
	[INFO] 10.244.1.2:45237 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202978s
	[INFO] 10.244.0.4:52108 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149798s
	[INFO] 10.244.0.4:52793 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093152s
	[INFO] 10.244.1.2:57128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187429s
	[INFO] 10.244.1.2:40536 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186246s
	[INFO] 10.244.1.2:52690 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120066s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275] <==
	[INFO] 10.244.0.4:51578 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122143s
	[INFO] 10.244.1.2:40259 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165953s
	[INFO] 10.244.1.2:39729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001829607s
	[INFO] 10.244.1.2:34733 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172404s
	[INFO] 10.244.1.2:45725 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129433s
	[INFO] 10.244.1.2:35820 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133249s
	[INFO] 10.244.2.2:40405 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00168841s
	[INFO] 10.244.0.4:40751 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000295717s
	[INFO] 10.244.0.4:35528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102349s
	[INFO] 10.244.0.4:36374 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00035359s
	[INFO] 10.244.0.4:51732 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098091s
	[INFO] 10.244.1.2:41291 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000329271s
	[INFO] 10.244.1.2:36756 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159777s
	[INFO] 10.244.2.2:54364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000374806s
	[INFO] 10.244.2.2:35469 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0003009s
	[INFO] 10.244.2.2:57557 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000412395s
	[INFO] 10.244.2.2:55375 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188342s
	[INFO] 10.244.0.4:50283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136579s
	[INFO] 10.244.0.4:60253 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000062518s
	[INFO] 10.244.1.2:48368 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000591883s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d8279db081c114756c0ef4369b7f2dcd81110abcda6769ff15356ef16d82899f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41672->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[443656988]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Apr-2024 19:02:28.534) (total time: 10606ms):
	Trace[443656988]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41672->10.96.0.1:443: read: connection reset by peer 10606ms (19:02:39.141)
	Trace[443656988]: [10.606796381s] [10.606796381s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41672->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41676->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[221408255]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Apr-2024 19:02:31.362) (total time: 10470ms):
	Trace[221408255]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41676->10.96.0.1:443: read: connection reset by peer 10470ms (19:02:41.832)
	Trace[221408255]: [10.470671401s] [10.470671401s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41676->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43400->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43400->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
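All three CoreDNS log excerpts above fail the same way: list/watch calls against the kubernetes Service VIP at https://10.96.0.1:443 first hit a TLS handshake timeout, then connection refused, then no route to host while the apiserver restarts. A minimal, hypothetical spot-check of that VIP from inside the cluster (not part of the test run) could use a throwaway curl pod; the pod name and image below are placeholders, and kubectl access to the ha-912667 context is assumed:

  kubectl --context ha-912667 run vip-check --rm -i --restart=Never \
    --image=curlimages/curl -- curl -sk -o /dev/null -w '%{http_code}\n' https://10.96.0.1:443/healthz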
	
	
	==> describe nodes <==
	Name:               ha-912667
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T18_50_19_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:50:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:04:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:02:54 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:02:54 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:02:54 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:02:54 +0000   Thu, 25 Apr 2024 18:50:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    ha-912667
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3a8edadaa67460ebdc313c0c3e1c3f7
	  System UUID:                a3a8edad-aa67-460e-bdc3-13c0c3e1c3f7
	  Boot ID:                    dc005c29-5a5e-4df7-8967-c057d8b3aa0a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nxhjn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-22wvx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-h4s2h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-912667                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-xlvjt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-912667             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-912667    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-mkgv5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-912667             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-912667                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 106s               kube-proxy       
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-912667 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-912667 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-912667 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                kubelet          Node ha-912667 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node ha-912667 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node ha-912667 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal   NodeReady                14m                kubelet          Node ha-912667 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Warning  ContainerGCFailed        3m25s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           96s                node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal   RegisteredNode           94s                node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal   RegisteredNode           35s                node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	
	
	Name:               ha-912667-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T18_52_33_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:52:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:04:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:03:43 +0000   Thu, 25 Apr 2024 19:02:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:03:43 +0000   Thu, 25 Apr 2024 19:02:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:03:43 +0000   Thu, 25 Apr 2024 19:02:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:03:43 +0000   Thu, 25 Apr 2024 19:02:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.66
	  Hostname:    ha-912667-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82894439088e4cc98841c062c296fef3
	  System UUID:                82894439-088e-4cc9-8841-c062c296fef3
	  Boot ID:                    5efcf1bd-8cfb-462d-98a7-2cfcf6ac7d39
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tcxzk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-912667-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-sq4lb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-912667-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-912667-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-rkbcp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-912667-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-912667-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 103s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                    node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-912667-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-912667-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-912667-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                    node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  NodeNotReady             8m47s                  node-controller  Node ha-912667-m02 status is now: NodeNotReady
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node ha-912667-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node ha-912667-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet          Node ha-912667-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           96s                    node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  RegisteredNode           94s                    node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  RegisteredNode           35s                    node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	
	
	Name:               ha-912667-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T18_53_46_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:53:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:04:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:04:10 +0000   Thu, 25 Apr 2024 18:53:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:04:10 +0000   Thu, 25 Apr 2024 18:53:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:04:10 +0000   Thu, 25 Apr 2024 18:53:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:04:10 +0000   Thu, 25 Apr 2024 18:53:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    ha-912667-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b314db0e66974911a4c3c03513ed8a46
	  System UUID:                b314db0e-6697-4911-a4c3-c03513ed8a46
	  Boot ID:                    d611c5ed-2f79-4739-8aa6-cd595c577d08
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6lkjg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-912667-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-gcbv6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-912667-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-912667-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9zxln                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-912667-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-912667-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-912667-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-912667-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-912667-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	  Normal   RegisteredNode           94s                node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node ha-912667-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node ha-912667-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node ha-912667-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 63s                kubelet          Node ha-912667-m03 has been rebooted, boot id: d611c5ed-2f79-4739-8aa6-cd595c577d08
	  Normal   RegisteredNode           35s                node-controller  Node ha-912667-m03 event: Registered Node ha-912667-m03 in Controller
	
	
	Name:               ha-912667-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T18_54_45_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:54:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:04:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:04:34 +0000   Thu, 25 Apr 2024 19:04:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:04:34 +0000   Thu, 25 Apr 2024 19:04:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:04:34 +0000   Thu, 25 Apr 2024 19:04:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:04:34 +0000   Thu, 25 Apr 2024 19:04:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-912667-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6d1da6a42954aa3b31899cd270783aa
	  System UUID:                c6d1da6a-4295-4aa3-b318-99cd270783aa
	  Boot ID:                    532d016c-b414-4642-af4e-a25f0615f501
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4l974       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-proxy-64vg4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 9m53s              kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-912667-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-912667-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-912667-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m59s              kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m58s              node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   RegisteredNode           9m58s              node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   RegisteredNode           9m56s              node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   NodeReady                9m49s              kubelet          Node ha-912667-m04 status is now: NodeReady
	  Normal   RegisteredNode           97s                node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   NodeNotReady             57s                node-controller  Node ha-912667-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           36s                node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   Starting                 10s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10s (x2 over 10s)  kubelet          Node ha-912667-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10s (x2 over 10s)  kubelet          Node ha-912667-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10s (x2 over 10s)  kubelet          Node ha-912667-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 10s                kubelet          Node ha-912667-m04 has been rebooted, boot id: 532d016c-b414-4642-af4e-a25f0615f501
	  Normal   NodeReady                10s                kubelet          Node ha-912667-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr25 18:50] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.058108] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076447] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.197185] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.122034] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.313908] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.923241] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.067466] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.659823] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.462418] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.581179] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.076665] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.874397] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.005828] kauditd_printk_skb: 74 callbacks suppressed
	[Apr25 18:59] kauditd_printk_skb: 1 callbacks suppressed
	[Apr25 19:02] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
	[  +0.155948] systemd-fstab-generator[3867]: Ignoring "noauto" option for root device
	[  +0.182921] systemd-fstab-generator[3881]: Ignoring "noauto" option for root device
	[  +0.158504] systemd-fstab-generator[3893]: Ignoring "noauto" option for root device
	[  +0.304221] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[  +1.482834] systemd-fstab-generator[4024]: Ignoring "noauto" option for root device
	[  +5.386683] kauditd_printk_skb: 122 callbacks suppressed
	[ +13.225322] kauditd_printk_skb: 86 callbacks suppressed
	[  +9.059996] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.857180] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [74e670ab4471745be8eecfacef997853b6afd5e8508a46b249cc8831adbbaf33] <==
	{"level":"warn","ts":"2024-04-25T19:03:34.889825Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T19:03:34.990438Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6fb28b9aae66857a","from":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-25T19:03:37.631768Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2f342753978b2ebf","rtt":"0s","error":"dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:37.631814Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2f342753978b2ebf","rtt":"0s","error":"dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:38.062012Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.179:2380/version","remote-member-id":"2f342753978b2ebf","error":"Get \"https://192.168.39.179:2380/version\": dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:38.062564Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2f342753978b2ebf","error":"Get \"https://192.168.39.179:2380/version\": dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:42.064788Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.179:2380/version","remote-member-id":"2f342753978b2ebf","error":"Get \"https://192.168.39.179:2380/version\": dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:42.06492Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2f342753978b2ebf","error":"Get \"https://192.168.39.179:2380/version\": dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:42.631933Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2f342753978b2ebf","rtt":"0s","error":"dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:42.632075Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2f342753978b2ebf","rtt":"0s","error":"dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:46.067373Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.179:2380/version","remote-member-id":"2f342753978b2ebf","error":"Get \"https://192.168.39.179:2380/version\": dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:46.067491Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2f342753978b2ebf","error":"Get \"https://192.168.39.179:2380/version\": dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:47.632069Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2f342753978b2ebf","rtt":"0s","error":"dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:47.632289Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2f342753978b2ebf","rtt":"0s","error":"dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:50.069784Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.179:2380/version","remote-member-id":"2f342753978b2ebf","error":"Get \"https://192.168.39.179:2380/version\": dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:50.069825Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2f342753978b2ebf","error":"Get \"https://192.168.39.179:2380/version\": dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-25T19:03:51.405658Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6fb28b9aae66857a","to":"2f342753978b2ebf","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-25T19:03:51.405957Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:03:51.405984Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:03:51.406519Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6fb28b9aae66857a","to":"2f342753978b2ebf","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-25T19:03:51.406604Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:03:51.422262Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:03:51.423991Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"warn","ts":"2024-04-25T19:03:52.632795Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2f342753978b2ebf","rtt":"0s","error":"dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:52.632975Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2f342753978b2ebf","rtt":"0s","error":"dial tcp 192.168.39.179:2380: connect: connection refused"}
	
	
	==> etcd [860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f] <==
	2024/04/25 19:00:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/25 19:00:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/25 19:00:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/25 19:00:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-25T19:00:36.809371Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9618157281405767419,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-04-25T19:00:37.015553Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-25T19:00:37.015622Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-25T19:00:37.015758Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6fb28b9aae66857a","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-25T19:00:37.016008Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016152Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016311Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016604Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.01674Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016844Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016964Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016995Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017101Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017243Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017482Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017583Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017797Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017874Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.021154Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2024-04-25T19:00:37.021482Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2024-04-25T19:00:37.021583Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-912667","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	
	
	==> kernel <==
	 19:04:44 up 15 min,  0 users,  load average: 0.37, 0.49, 0.35
	Linux ha-912667 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef] <==
	I0425 19:02:17.256753       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0425 19:02:27.563414       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0425 19:02:29.544233       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0425 19:02:41.832388       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.66:48974->10.96.0.1:443: read: connection reset by peer
	I0425 19:02:43.833220       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0425 19:02:46.834947       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [664d121edb6b713211c13c0cedfbd4e6ff816158d01902cca3a3dc628d413f71] <==
	I0425 19:04:08.916362       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 19:04:18.927447       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 19:04:18.927834       1 main.go:227] handling current node
	I0425 19:04:18.927881       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 19:04:18.927903       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 19:04:18.928109       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0425 19:04:18.928137       1 main.go:250] Node ha-912667-m03 has CIDR [10.244.2.0/24] 
	I0425 19:04:18.928227       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 19:04:18.928267       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 19:04:28.943343       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 19:04:28.943392       1 main.go:227] handling current node
	I0425 19:04:28.943405       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 19:04:28.943420       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 19:04:28.943608       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0425 19:04:28.943647       1 main.go:250] Node ha-912667-m03 has CIDR [10.244.2.0/24] 
	I0425 19:04:28.943832       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 19:04:28.943872       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 19:04:38.952899       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 19:04:38.952945       1 main.go:227] handling current node
	I0425 19:04:38.952956       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 19:04:38.952962       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 19:04:38.953066       1 main.go:223] Handling node with IPs: map[192.168.39.179:{}]
	I0425 19:04:38.953072       1 main.go:250] Node ha-912667-m03 has CIDR [10.244.2.0/24] 
	I0425 19:04:38.953114       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 19:04:38.953118       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [62b5eacd47457075997143150b5a47f1e32bc6ab5272420955b83158111ce6a3] <==
	I0425 19:02:17.135384       1 options.go:221] external host was not specified, using 192.168.39.189
	I0425 19:02:17.141927       1 server.go:148] Version: v1.30.0
	I0425 19:02:17.141977       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:02:18.130122       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0425 19:02:18.134094       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0425 19:02:18.134144       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0425 19:02:18.134306       1 instance.go:299] Using reconciler: lease
	I0425 19:02:18.134813       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0425 19:02:38.125080       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0425 19:02:38.131211       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0425 19:02:38.135673       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0425 19:02:38.135679       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [be1257292410198065593c6f0f876643b3d20d2dd3e8011891b55d35e4758d63] <==
	I0425 19:02:50.399606       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0425 19:02:50.401861       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0425 19:02:50.481048       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0425 19:02:50.481113       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0425 19:02:50.482004       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0425 19:02:50.482075       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0425 19:02:50.485191       1 shared_informer.go:320] Caches are synced for configmaps
	I0425 19:02:50.486218       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0425 19:02:50.490245       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0425 19:02:50.490325       1 aggregator.go:165] initial CRD sync complete...
	I0425 19:02:50.490354       1 autoregister_controller.go:141] Starting autoregister controller
	I0425 19:02:50.490395       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0425 19:02:50.490405       1 cache.go:39] Caches are synced for autoregister controller
	I0425 19:02:50.492523       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0425 19:02:50.527371       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.179]
	I0425 19:02:50.528944       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0425 19:02:50.530308       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0425 19:02:50.530329       1 policy_source.go:224] refreshing policies
	I0425 19:02:50.582040       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0425 19:02:50.629671       1 controller.go:615] quota admission added evaluator for: endpoints
	I0425 19:02:50.655439       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0425 19:02:50.672625       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0425 19:02:51.387496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0425 19:02:52.280675       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.179 192.168.39.189]
	W0425 19:03:12.105849       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.189 192.168.39.66]
	
	
	==> kube-controller-manager [35af403e5f5b77c282e2ab8be29c6a089e75d1f1c8a54fd06c0799c3de43e0d1] <==
	I0425 19:03:09.487578       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-912667-m04"
	I0425 19:03:09.488483       1 shared_informer.go:320] Caches are synced for endpoint
	I0425 19:03:09.488813       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0425 19:03:09.553718       1 shared_informer.go:320] Caches are synced for crt configmap
	I0425 19:03:09.591343       1 shared_informer.go:320] Caches are synced for PVC protection
	I0425 19:03:09.592859       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0425 19:03:09.606086       1 shared_informer.go:320] Caches are synced for stateful set
	I0425 19:03:09.618413       1 shared_informer.go:320] Caches are synced for attach detach
	I0425 19:03:09.624890       1 shared_informer.go:320] Caches are synced for resource quota
	I0425 19:03:09.633427       1 shared_informer.go:320] Caches are synced for persistent volume
	I0425 19:03:09.647051       1 shared_informer.go:320] Caches are synced for ephemeral
	I0425 19:03:09.650604       1 shared_informer.go:320] Caches are synced for expand
	I0425 19:03:09.671839       1 shared_informer.go:320] Caches are synced for resource quota
	I0425 19:03:10.101127       1 shared_informer.go:320] Caches are synced for garbage collector
	I0425 19:03:10.150955       1 shared_informer.go:320] Caches are synced for garbage collector
	I0425 19:03:10.151071       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0425 19:03:14.550653       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-fkn97 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-fkn97\": the object has been modified; please apply your changes to the latest version and try again"
	I0425 19:03:14.550870       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"5b37e3de-4756-46c4-bc64-e19ad4c50ea2", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-fkn97 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-fkn97": the object has been modified; please apply your changes to the latest version and try again
	I0425 19:03:14.572386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.267866ms"
	I0425 19:03:14.572621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.091µs"
	I0425 19:03:40.989243       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.561895ms"
	I0425 19:03:40.991021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="145.951µs"
	I0425 19:04:04.767828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.691251ms"
	I0425 19:04:04.767989       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.281µs"
	I0425 19:04:34.929894       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-912667-m04"
	
	
	==> kube-controller-manager [3997d681dd3c6abf6fecf3119895f445d9d960e69cc4d6b33b77f4313810dda6] <==
	I0425 19:02:18.301464       1 serving.go:380] Generated self-signed cert in-memory
	I0425 19:02:18.625815       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0425 19:02:18.625868       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:02:18.627832       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0425 19:02:18.628112       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0425 19:02:18.628514       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0425 19:02:18.629347       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0425 19:02:39.142611       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.189:8443/healthz\": dial tcp 192.168.39.189:8443: connect: connection refused"
	
	
	==> kube-proxy [35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386] <==
	E0425 18:59:24.332835       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:27.401786       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:27.401880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:27.401946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:27.401787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:27.401972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:27.401909       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:33.547076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:33.547159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:33.547221       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:33.547249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:36.617847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:36.617973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:42.761607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:42.761743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:45.832905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:45.832984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:45.833119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:45.833141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 19:00:01.192303       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 19:00:01.192447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 19:00:01.192670       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 19:00:01.192796       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 19:00:07.339111       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 19:00:07.339368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [9a710c78ee141c7c5c9eb1a047b80fdb89959cf74148c464b8565c4350725fea] <==
	I0425 19:02:18.320401       1 server_linux.go:69] "Using iptables proxy"
	E0425 19:02:19.433074       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-912667\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0425 19:02:22.505405       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-912667\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0425 19:02:25.576158       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-912667\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0425 19:02:31.722379       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-912667\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0425 19:02:40.937422       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-912667\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0425 19:02:57.501486       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.189"]
	I0425 19:02:57.555078       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:02:57.555159       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:02:57.555181       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:02:57.558439       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:02:57.558820       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:02:57.558864       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:02:57.560336       1 config.go:192] "Starting service config controller"
	I0425 19:02:57.560455       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:02:57.560486       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:02:57.560490       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:02:57.561413       1 config.go:319] "Starting node config controller"
	I0425 19:02:57.561448       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:02:57.660627       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:02:57.660671       1 shared_informer.go:320] Caches are synced for service config
	I0425 19:02:57.662198       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5f799a7e1725b4f3b7c0a031b6fada2efc97f1662c8c5d5759c4beedb20b3807] <==
	W0425 19:02:47.154408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.189:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.154485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.189:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:47.205805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.189:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.205944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.189:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:47.289033       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.189:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.289201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.189:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:47.398187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.189:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.398302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.189:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:47.418066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.189:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.418156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.189:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:47.973204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.189:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.973339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.189:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:48.315135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.189:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:48.315227       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.189:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:50.408241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 19:02:50.408302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 19:02:50.408482       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 19:02:50.408522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0425 19:02:50.408593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0425 19:02:50.408632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0425 19:02:50.408680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 19:02:50.408773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 19:02:50.412877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 19:02:50.413044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0425 19:02:53.147673       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5] <==
	W0425 19:00:33.125154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 19:00:33.125323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 19:00:33.243120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0425 19:00:33.243228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0425 19:00:33.264627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 19:00:33.264832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0425 19:00:33.351271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0425 19:00:33.351370       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0425 19:00:33.420989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 19:00:33.421116       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 19:00:33.471445       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 19:00:33.471565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0425 19:00:33.762935       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 19:00:33.763068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 19:00:34.283302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 19:00:34.283431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 19:00:34.542249       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 19:00:34.542378       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0425 19:00:34.602073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0425 19:00:34.602253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0425 19:00:34.660855       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 19:00:34.660966       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 19:00:35.019260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:00:35.019330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:00:36.706567       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 25 19:02:55 ha-912667 kubelet[1386]: I0425 19:02:55.032152    1386 scope.go:117] "RemoveContainer" containerID="1211fe8cf15a145726136383a04b807104fd7b5d177b97cd1a5a6edae325cf97"
	Apr 25 19:02:55 ha-912667 kubelet[1386]: I0425 19:02:55.032630    1386 scope.go:117] "RemoveContainer" containerID="15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef"
	Apr 25 19:02:55 ha-912667 kubelet[1386]: E0425 19:02:55.033172    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-xlvjt_kube-system(191ff28e-07d7-459e-afe5-e3d8c23e1016)\"" pod="kube-system/kindnet-xlvjt" podUID="191ff28e-07d7-459e-afe5-e3d8c23e1016"
	Apr 25 19:02:57 ha-912667 kubelet[1386]: I0425 19:02:57.880389    1386 scope.go:117] "RemoveContainer" containerID="3997d681dd3c6abf6fecf3119895f445d9d960e69cc4d6b33b77f4313810dda6"
	Apr 25 19:03:07 ha-912667 kubelet[1386]: I0425 19:03:07.880125    1386 scope.go:117] "RemoveContainer" containerID="7666e74773b8beaa8c78a2cffc10db8d396168ac8eb484af76b1d5dad8cdf736"
	Apr 25 19:03:07 ha-912667 kubelet[1386]: E0425 19:03:07.880420    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f3a0b111-609d-49b3-a056-71eb4b641224)\"" pod="kube-system/storage-provisioner" podUID="f3a0b111-609d-49b3-a056-71eb4b641224"
	Apr 25 19:03:09 ha-912667 kubelet[1386]: I0425 19:03:09.879480    1386 scope.go:117] "RemoveContainer" containerID="15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef"
	Apr 25 19:03:09 ha-912667 kubelet[1386]: E0425 19:03:09.879881    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-xlvjt_kube-system(191ff28e-07d7-459e-afe5-e3d8c23e1016)\"" pod="kube-system/kindnet-xlvjt" podUID="191ff28e-07d7-459e-afe5-e3d8c23e1016"
	Apr 25 19:03:18 ha-912667 kubelet[1386]: E0425 19:03:18.918525    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:03:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:03:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:03:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:03:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 19:03:19 ha-912667 kubelet[1386]: I0425 19:03:19.174490    1386 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-nxhjn" podStartSLOduration=552.633112301 podStartE2EDuration="9m15.174458351s" podCreationTimestamp="2024-04-25 18:54:04 +0000 UTC" firstStartedPulling="2024-04-25 18:54:06.043791927 +0000 UTC m=+227.337430631" lastFinishedPulling="2024-04-25 18:54:08.585137978 +0000 UTC m=+229.878776681" observedRunningTime="2024-04-25 18:54:08.925037418 +0000 UTC m=+230.218676138" watchObservedRunningTime="2024-04-25 19:03:19.174458351 +0000 UTC m=+780.468097075"
	Apr 25 19:03:22 ha-912667 kubelet[1386]: I0425 19:03:22.879828    1386 scope.go:117] "RemoveContainer" containerID="7666e74773b8beaa8c78a2cffc10db8d396168ac8eb484af76b1d5dad8cdf736"
	Apr 25 19:03:23 ha-912667 kubelet[1386]: I0425 19:03:23.880676    1386 scope.go:117] "RemoveContainer" containerID="15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef"
	Apr 25 19:03:23 ha-912667 kubelet[1386]: E0425 19:03:23.881292    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-xlvjt_kube-system(191ff28e-07d7-459e-afe5-e3d8c23e1016)\"" pod="kube-system/kindnet-xlvjt" podUID="191ff28e-07d7-459e-afe5-e3d8c23e1016"
	Apr 25 19:03:37 ha-912667 kubelet[1386]: I0425 19:03:37.879992    1386 scope.go:117] "RemoveContainer" containerID="15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef"
	Apr 25 19:03:57 ha-912667 kubelet[1386]: I0425 19:03:57.879915    1386 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-912667" podUID="bd3267a7-206d-4e47-b154-a7f17a492684"
	Apr 25 19:03:57 ha-912667 kubelet[1386]: I0425 19:03:57.901622    1386 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-912667"
	Apr 25 19:04:18 ha-912667 kubelet[1386]: E0425 19:04:18.915613    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:04:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:04:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:04:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:04:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:04:42.787556   32061 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18757-6355/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-912667 -n ha-912667
helpers_test.go:261: (dbg) Run:  kubectl --context ha-912667 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (372.33s)

TestMultiControlPlane/serial/StopCluster (142.05s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 stop -v=7 --alsologtostderr
E0425 19:05:45.439160   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912667 stop -v=7 --alsologtostderr: exit status 82 (2m0.494205061s)

                                                
                                                
-- stdout --
	* Stopping node "ha-912667-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 19:05:03.169843   32470 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:05:03.170090   32470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:05:03.170101   32470 out.go:304] Setting ErrFile to fd 2...
	I0425 19:05:03.170105   32470 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:05:03.170298   32470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:05:03.170518   32470 out.go:298] Setting JSON to false
	I0425 19:05:03.170592   32470 mustload.go:65] Loading cluster: ha-912667
	I0425 19:05:03.170940   32470 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:05:03.171017   32470 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 19:05:03.171189   32470 mustload.go:65] Loading cluster: ha-912667
	I0425 19:05:03.171313   32470 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:05:03.171335   32470 stop.go:39] StopHost: ha-912667-m04
	I0425 19:05:03.171699   32470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:05:03.171735   32470 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:05:03.186426   32470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37011
	I0425 19:05:03.186886   32470 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:05:03.187510   32470 main.go:141] libmachine: Using API Version  1
	I0425 19:05:03.187533   32470 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:05:03.187879   32470 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:05:03.190497   32470 out.go:177] * Stopping node "ha-912667-m04"  ...
	I0425 19:05:03.192567   32470 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0425 19:05:03.192591   32470 main.go:141] libmachine: (ha-912667-m04) Calling .DriverName
	I0425 19:05:03.192862   32470 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0425 19:05:03.192896   32470 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHHostname
	I0425 19:05:03.195626   32470 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 19:05:03.196125   32470 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 20:04:26 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 19:05:03.196157   32470 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 19:05:03.196307   32470 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHPort
	I0425 19:05:03.196501   32470 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHKeyPath
	I0425 19:05:03.196677   32470 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHUsername
	I0425 19:05:03.196831   32470 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m04/id_rsa Username:docker}
	I0425 19:05:03.285906   32470 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0425 19:05:03.340857   32470 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0425 19:05:03.395010   32470 main.go:141] libmachine: Stopping "ha-912667-m04"...
	I0425 19:05:03.395038   32470 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 19:05:03.396629   32470 main.go:141] libmachine: (ha-912667-m04) Calling .Stop
	I0425 19:05:03.400127   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 0/120
	I0425 19:05:04.401924   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 1/120
	I0425 19:05:05.403247   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 2/120
	I0425 19:05:06.404975   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 3/120
	I0425 19:05:07.406965   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 4/120
	I0425 19:05:08.409298   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 5/120
	I0425 19:05:09.410705   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 6/120
	I0425 19:05:10.412398   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 7/120
	I0425 19:05:11.413648   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 8/120
	I0425 19:05:12.415252   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 9/120
	I0425 19:05:13.417047   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 10/120
	I0425 19:05:14.418724   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 11/120
	I0425 19:05:15.421052   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 12/120
	I0425 19:05:16.422548   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 13/120
	I0425 19:05:17.424615   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 14/120
	I0425 19:05:18.426424   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 15/120
	I0425 19:05:19.428580   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 16/120
	I0425 19:05:20.430001   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 17/120
	I0425 19:05:21.431598   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 18/120
	I0425 19:05:22.432827   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 19/120
	I0425 19:05:23.435173   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 20/120
	I0425 19:05:24.436611   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 21/120
	I0425 19:05:25.438225   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 22/120
	I0425 19:05:26.440156   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 23/120
	I0425 19:05:27.441477   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 24/120
	I0425 19:05:28.443176   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 25/120
	I0425 19:05:29.444542   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 26/120
	I0425 19:05:30.445879   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 27/120
	I0425 19:05:31.447080   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 28/120
	I0425 19:05:32.448477   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 29/120
	I0425 19:05:33.450893   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 30/120
	I0425 19:05:34.452874   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 31/120
	I0425 19:05:35.454362   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 32/120
	I0425 19:05:36.456769   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 33/120
	I0425 19:05:37.458954   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 34/120
	I0425 19:05:38.460757   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 35/120
	I0425 19:05:39.462102   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 36/120
	I0425 19:05:40.463582   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 37/120
	I0425 19:05:41.465096   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 38/120
	I0425 19:05:42.466617   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 39/120
	I0425 19:05:43.468535   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 40/120
	I0425 19:05:44.469767   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 41/120
	I0425 19:05:45.471513   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 42/120
	I0425 19:05:46.472879   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 43/120
	I0425 19:05:47.474265   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 44/120
	I0425 19:05:48.476308   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 45/120
	I0425 19:05:49.478185   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 46/120
	I0425 19:05:50.480328   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 47/120
	I0425 19:05:51.481665   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 48/120
	I0425 19:05:52.483226   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 49/120
	I0425 19:05:53.485318   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 50/120
	I0425 19:05:54.486638   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 51/120
	I0425 19:05:55.488869   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 52/120
	I0425 19:05:56.490167   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 53/120
	I0425 19:05:57.491967   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 54/120
	I0425 19:05:58.493829   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 55/120
	I0425 19:05:59.495186   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 56/120
	I0425 19:06:00.496420   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 57/120
	I0425 19:06:01.498442   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 58/120
	I0425 19:06:02.499899   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 59/120
	I0425 19:06:03.502275   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 60/120
	I0425 19:06:04.503617   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 61/120
	I0425 19:06:05.505756   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 62/120
	I0425 19:06:06.507095   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 63/120
	I0425 19:06:07.508659   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 64/120
	I0425 19:06:08.510534   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 65/120
	I0425 19:06:09.512887   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 66/120
	I0425 19:06:10.514323   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 67/120
	I0425 19:06:11.515736   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 68/120
	I0425 19:06:12.517160   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 69/120
	I0425 19:06:13.519272   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 70/120
	I0425 19:06:14.520890   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 71/120
	I0425 19:06:15.522106   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 72/120
	I0425 19:06:16.523440   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 73/120
	I0425 19:06:17.524605   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 74/120
	I0425 19:06:18.526382   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 75/120
	I0425 19:06:19.527816   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 76/120
	I0425 19:06:20.529291   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 77/120
	I0425 19:06:21.531146   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 78/120
	I0425 19:06:22.532666   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 79/120
	I0425 19:06:23.534668   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 80/120
	I0425 19:06:24.536675   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 81/120
	I0425 19:06:25.538939   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 82/120
	I0425 19:06:26.540295   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 83/120
	I0425 19:06:27.541576   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 84/120
	I0425 19:06:28.543493   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 85/120
	I0425 19:06:29.544869   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 86/120
	I0425 19:06:30.546337   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 87/120
	I0425 19:06:31.548645   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 88/120
	I0425 19:06:32.550126   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 89/120
	I0425 19:06:33.552515   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 90/120
	I0425 19:06:34.554018   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 91/120
	I0425 19:06:35.555467   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 92/120
	I0425 19:06:36.556785   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 93/120
	I0425 19:06:37.558156   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 94/120
	I0425 19:06:38.559934   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 95/120
	I0425 19:06:39.561422   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 96/120
	I0425 19:06:40.563067   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 97/120
	I0425 19:06:41.564399   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 98/120
	I0425 19:06:42.566115   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 99/120
	I0425 19:06:43.568150   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 100/120
	I0425 19:06:44.569506   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 101/120
	I0425 19:06:45.570956   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 102/120
	I0425 19:06:46.572634   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 103/120
	I0425 19:06:47.574027   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 104/120
	I0425 19:06:48.576069   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 105/120
	I0425 19:06:49.577410   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 106/120
	I0425 19:06:50.578829   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 107/120
	I0425 19:06:51.580621   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 108/120
	I0425 19:06:52.581994   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 109/120
	I0425 19:06:53.584143   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 110/120
	I0425 19:06:54.585579   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 111/120
	I0425 19:06:55.586994   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 112/120
	I0425 19:06:56.589209   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 113/120
	I0425 19:06:57.590597   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 114/120
	I0425 19:06:58.592558   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 115/120
	I0425 19:06:59.593877   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 116/120
	I0425 19:07:00.595371   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 117/120
	I0425 19:07:01.596610   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 118/120
	I0425 19:07:02.598045   32470 main.go:141] libmachine: (ha-912667-m04) Waiting for machine to stop 119/120
	I0425 19:07:03.599183   32470 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0425 19:07:03.599252   32470 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0425 19:07:03.600756   32470 out.go:177] 
	W0425 19:07:03.601957   32470 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0425 19:07:03.601979   32470 out.go:239] * 
	* 
	W0425 19:07:03.604441   32470 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 19:07:03.605729   32470 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-912667 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr: exit status 3 (18.963415847s)

                                                
                                                
-- stdout --
	ha-912667
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-912667-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 19:07:03.669321   32914 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:07:03.669574   32914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:07:03.669584   32914 out.go:304] Setting ErrFile to fd 2...
	I0425 19:07:03.669588   32914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:07:03.669775   32914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:07:03.669957   32914 out.go:298] Setting JSON to false
	I0425 19:07:03.669980   32914 mustload.go:65] Loading cluster: ha-912667
	I0425 19:07:03.670025   32914 notify.go:220] Checking for updates...
	I0425 19:07:03.670370   32914 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:07:03.670384   32914 status.go:255] checking status of ha-912667 ...
	I0425 19:07:03.670742   32914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:07:03.670795   32914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:07:03.690641   32914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0425 19:07:03.691157   32914 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:07:03.691733   32914 main.go:141] libmachine: Using API Version  1
	I0425 19:07:03.691761   32914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:07:03.692100   32914 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:07:03.692285   32914 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 19:07:03.693934   32914 status.go:330] ha-912667 host status = "Running" (err=<nil>)
	I0425 19:07:03.693948   32914 host.go:66] Checking if "ha-912667" exists ...
	I0425 19:07:03.694301   32914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:07:03.694348   32914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:07:03.709633   32914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0425 19:07:03.710003   32914 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:07:03.710484   32914 main.go:141] libmachine: Using API Version  1
	I0425 19:07:03.710518   32914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:07:03.710804   32914 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:07:03.710997   32914 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 19:07:03.713810   32914 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:07:03.714201   32914 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:07:03.714247   32914 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:07:03.714378   32914 host.go:66] Checking if "ha-912667" exists ...
	I0425 19:07:03.714691   32914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:07:03.714740   32914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:07:03.729804   32914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43869
	I0425 19:07:03.730242   32914 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:07:03.730723   32914 main.go:141] libmachine: Using API Version  1
	I0425 19:07:03.730748   32914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:07:03.731030   32914 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:07:03.731200   32914 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:07:03.731357   32914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 19:07:03.731384   32914 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:07:03.734220   32914 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:07:03.734600   32914 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:07:03.734629   32914 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:07:03.734754   32914 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:07:03.734906   32914 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:07:03.735056   32914 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:07:03.735156   32914 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 19:07:03.821618   32914 ssh_runner.go:195] Run: systemctl --version
	I0425 19:07:03.830470   32914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 19:07:03.851139   32914 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 19:07:03.851167   32914 api_server.go:166] Checking apiserver status ...
	I0425 19:07:03.851213   32914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 19:07:03.870217   32914 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5042/cgroup
	W0425 19:07:03.882566   32914 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5042/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 19:07:03.882633   32914 ssh_runner.go:195] Run: ls
	I0425 19:07:03.889319   32914 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 19:07:03.897938   32914 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 19:07:03.897972   32914 status.go:422] ha-912667 apiserver status = Running (err=<nil>)
	I0425 19:07:03.897986   32914 status.go:257] ha-912667 status: &{Name:ha-912667 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 19:07:03.898013   32914 status.go:255] checking status of ha-912667-m02 ...
	I0425 19:07:03.898358   32914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:07:03.898395   32914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:07:03.913474   32914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45815
	I0425 19:07:03.913936   32914 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:07:03.914418   32914 main.go:141] libmachine: Using API Version  1
	I0425 19:07:03.914444   32914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:07:03.914756   32914 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:07:03.914960   32914 main.go:141] libmachine: (ha-912667-m02) Calling .GetState
	I0425 19:07:03.916474   32914 status.go:330] ha-912667-m02 host status = "Running" (err=<nil>)
	I0425 19:07:03.916492   32914 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 19:07:03.916798   32914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:07:03.916837   32914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:07:03.932033   32914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I0425 19:07:03.932490   32914 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:07:03.933024   32914 main.go:141] libmachine: Using API Version  1
	I0425 19:07:03.933048   32914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:07:03.933402   32914 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:07:03.933567   32914 main.go:141] libmachine: (ha-912667-m02) Calling .GetIP
	I0425 19:07:03.936202   32914 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 19:07:03.936717   32914 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 20:02:23 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 19:07:03.936749   32914 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 19:07:03.936868   32914 host.go:66] Checking if "ha-912667-m02" exists ...
	I0425 19:07:03.937245   32914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:07:03.937299   32914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:07:03.952106   32914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46291
	I0425 19:07:03.952512   32914 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:07:03.952967   32914 main.go:141] libmachine: Using API Version  1
	I0425 19:07:03.952990   32914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:07:03.953283   32914 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:07:03.953454   32914 main.go:141] libmachine: (ha-912667-m02) Calling .DriverName
	I0425 19:07:03.953670   32914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 19:07:03.953695   32914 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHHostname
	I0425 19:07:03.956155   32914 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 19:07:03.956613   32914 main.go:141] libmachine: (ha-912667-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:58:a0", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 20:02:23 +0000 UTC Type:0 Mac:52:54:00:5a:58:a0 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-912667-m02 Clientid:01:52:54:00:5a:58:a0}
	I0425 19:07:03.956642   32914 main.go:141] libmachine: (ha-912667-m02) DBG | domain ha-912667-m02 has defined IP address 192.168.39.66 and MAC address 52:54:00:5a:58:a0 in network mk-ha-912667
	I0425 19:07:03.956797   32914 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHPort
	I0425 19:07:03.956962   32914 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHKeyPath
	I0425 19:07:03.957120   32914 main.go:141] libmachine: (ha-912667-m02) Calling .GetSSHUsername
	I0425 19:07:03.957264   32914 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m02/id_rsa Username:docker}
	I0425 19:07:04.048842   32914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 19:07:04.069466   32914 kubeconfig.go:125] found "ha-912667" server: "https://192.168.39.254:8443"
	I0425 19:07:04.069502   32914 api_server.go:166] Checking apiserver status ...
	I0425 19:07:04.069543   32914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 19:07:04.089106   32914 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup
	W0425 19:07:04.102054   32914 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 19:07:04.102107   32914 ssh_runner.go:195] Run: ls
	I0425 19:07:04.107811   32914 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0425 19:07:04.112753   32914 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0425 19:07:04.112781   32914 status.go:422] ha-912667-m02 apiserver status = Running (err=<nil>)
	I0425 19:07:04.112790   32914 status.go:257] ha-912667-m02 status: &{Name:ha-912667-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 19:07:04.112805   32914 status.go:255] checking status of ha-912667-m04 ...
	I0425 19:07:04.113127   32914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:07:04.113186   32914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:07:04.128262   32914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0425 19:07:04.128817   32914 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:07:04.129301   32914 main.go:141] libmachine: Using API Version  1
	I0425 19:07:04.129325   32914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:07:04.129681   32914 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:07:04.129866   32914 main.go:141] libmachine: (ha-912667-m04) Calling .GetState
	I0425 19:07:04.131608   32914 status.go:330] ha-912667-m04 host status = "Running" (err=<nil>)
	I0425 19:07:04.131625   32914 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 19:07:04.131917   32914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:07:04.131968   32914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:07:04.147272   32914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34673
	I0425 19:07:04.147740   32914 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:07:04.148268   32914 main.go:141] libmachine: Using API Version  1
	I0425 19:07:04.148304   32914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:07:04.148681   32914 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:07:04.148880   32914 main.go:141] libmachine: (ha-912667-m04) Calling .GetIP
	I0425 19:07:04.151641   32914 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 19:07:04.152076   32914 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 20:04:26 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 19:07:04.152105   32914 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 19:07:04.152240   32914 host.go:66] Checking if "ha-912667-m04" exists ...
	I0425 19:07:04.152521   32914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:07:04.152556   32914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:07:04.166522   32914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0425 19:07:04.166926   32914 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:07:04.167399   32914 main.go:141] libmachine: Using API Version  1
	I0425 19:07:04.167419   32914 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:07:04.167685   32914 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:07:04.167849   32914 main.go:141] libmachine: (ha-912667-m04) Calling .DriverName
	I0425 19:07:04.168016   32914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 19:07:04.168032   32914 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHHostname
	I0425 19:07:04.170480   32914 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 19:07:04.170900   32914 main.go:141] libmachine: (ha-912667-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:54:c9", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 20:04:26 +0000 UTC Type:0 Mac:52:54:00:a3:54:c9 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-912667-m04 Clientid:01:52:54:00:a3:54:c9}
	I0425 19:07:04.170927   32914 main.go:141] libmachine: (ha-912667-m04) DBG | domain ha-912667-m04 has defined IP address 192.168.39.232 and MAC address 52:54:00:a3:54:c9 in network mk-ha-912667
	I0425 19:07:04.171084   32914 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHPort
	I0425 19:07:04.171249   32914 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHKeyPath
	I0425 19:07:04.171415   32914 main.go:141] libmachine: (ha-912667-m04) Calling .GetSSHUsername
	I0425 19:07:04.171560   32914 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667-m04/id_rsa Username:docker}
	W0425 19:07:22.570420   32914 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.232:22: connect: no route to host
	W0425 19:07:22.570511   32914 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0425 19:07:22.570534   32914 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0425 19:07:22.570545   32914 status.go:257] ha-912667-m04 status: &{Name:ha-912667-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0425 19:07:22.570569   32914 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-912667 -n ha-912667
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-912667 logs -n 25: (1.925179518s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                      |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-912667 ssh -n ha-912667-m02 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m03_ha-912667-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04:/home/docker/cp-test_ha-912667-m03_ha-912667-m04.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m04 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m03_ha-912667-m04.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp testdata/cp-test.txt                                              | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04:/home/docker/cp-test.txt                                         |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile60710412/001/cp-test_ha-912667-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667:/home/docker/cp-test_ha-912667-m04_ha-912667.txt                     |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667 sudo cat                                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667.txt                               |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m02:/home/docker/cp-test_ha-912667-m04_ha-912667-m02.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m02 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667-m02.txt                           |           |         |         |                     |                     |
	| cp      | ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m03:/home/docker/cp-test_ha-912667-m04_ha-912667-m03.txt             |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n                                                               | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | ha-912667-m04 sudo cat                                                         |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                       |           |         |         |                     |                     |
	| ssh     | ha-912667 ssh -n ha-912667-m03 sudo cat                                        | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC | 25 Apr 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-912667-m04_ha-912667-m03.txt                           |           |         |         |                     |                     |
	| node    | ha-912667 node stop m02 -v=7                                                   | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:55 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | ha-912667 node start m02 -v=7                                                  | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:57 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-912667 -v=7                                                         | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:58 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | -p ha-912667 -v=7                                                              | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 18:58 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| start   | -p ha-912667 --wait=true -v=7                                                  | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 19:00 UTC | 25 Apr 24 19:04 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| node    | list -p ha-912667                                                              | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 19:04 UTC |                     |
	| node    | ha-912667 node delete m03 -v=7                                                 | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 19:04 UTC | 25 Apr 24 19:05 UTC |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	| stop    | ha-912667 stop -v=7                                                            | ha-912667 | jenkins | v1.33.0 | 25 Apr 24 19:05 UTC |                     |
	|         | --alsologtostderr                                                              |           |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:00:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:00:35.714252   30712 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:00:35.714369   30712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:00:35.714379   30712 out.go:304] Setting ErrFile to fd 2...
	I0425 19:00:35.714384   30712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:00:35.714602   30712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:00:35.715150   30712 out.go:298] Setting JSON to false
	I0425 19:00:35.716127   30712 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2582,"bootTime":1714069054,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:00:35.716188   30712 start.go:139] virtualization: kvm guest
	I0425 19:00:35.718896   30712 out.go:177] * [ha-912667] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:00:35.720707   30712 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:00:35.720692   30712 notify.go:220] Checking for updates...
	I0425 19:00:35.722721   30712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:00:35.724284   30712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:00:35.725817   30712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:00:35.727182   30712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:00:35.728662   30712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:00:35.730356   30712 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:00:35.730445   30712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:00:35.730817   30712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:00:35.730852   30712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:00:35.748209   30712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0425 19:00:35.748639   30712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:00:35.749107   30712 main.go:141] libmachine: Using API Version  1
	I0425 19:00:35.749125   30712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:00:35.749450   30712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:00:35.749620   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:00:35.783601   30712 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:00:35.785049   30712 start.go:297] selected driver: kvm2
	I0425 19:00:35.785063   30712 start.go:901] validating driver "kvm2" against &{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.232 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:00:35.785250   30712 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:00:35.785768   30712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:00:35.785889   30712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:00:35.799984   30712 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:00:35.800891   30712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:00:35.800965   30712 cni.go:84] Creating CNI manager for ""
	I0425 19:00:35.800981   30712 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0425 19:00:35.801044   30712 start.go:340] cluster config:
	{Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.232 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:00:35.801216   30712 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:00:35.803114   30712 out.go:177] * Starting "ha-912667" primary control-plane node in "ha-912667" cluster
	I0425 19:00:35.804518   30712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:00:35.804546   30712 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:00:35.804552   30712 cache.go:56] Caching tarball of preloaded images
	I0425 19:00:35.804628   30712 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:00:35.804643   30712 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 19:00:35.804773   30712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/config.json ...
	I0425 19:00:35.804953   30712 start.go:360] acquireMachinesLock for ha-912667: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:00:35.805000   30712 start.go:364] duration metric: took 31.437µs to acquireMachinesLock for "ha-912667"
	I0425 19:00:35.805014   30712 start.go:96] Skipping create...Using existing machine configuration
	I0425 19:00:35.805021   30712 fix.go:54] fixHost starting: 
	I0425 19:00:35.805256   30712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:00:35.805283   30712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:00:35.819008   30712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
	I0425 19:00:35.819412   30712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:00:35.819874   30712 main.go:141] libmachine: Using API Version  1
	I0425 19:00:35.819890   30712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:00:35.820150   30712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:00:35.820331   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:00:35.820453   30712 main.go:141] libmachine: (ha-912667) Calling .GetState
	I0425 19:00:35.821988   30712 fix.go:112] recreateIfNeeded on ha-912667: state=Running err=<nil>
	W0425 19:00:35.822008   30712 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 19:00:35.824006   30712 out.go:177] * Updating the running kvm2 "ha-912667" VM ...
	I0425 19:00:35.825258   30712 machine.go:94] provisionDockerMachine start ...
	I0425 19:00:35.825278   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:00:35.825446   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:35.827950   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:35.828404   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:35.828431   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:35.828551   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:00:35.828720   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:35.828917   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:35.829038   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:00:35.829180   30712 main.go:141] libmachine: Using SSH client type: native
	I0425 19:00:35.829372   30712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 19:00:35.829387   30712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 19:00:35.945081   30712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-912667
	
	I0425 19:00:35.945101   30712 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 19:00:35.945336   30712 buildroot.go:166] provisioning hostname "ha-912667"
	I0425 19:00:35.945360   30712 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 19:00:35.945537   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:35.948199   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:35.948542   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:35.948575   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:35.948774   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:00:35.948935   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:35.949139   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:35.949265   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:00:35.949432   30712 main.go:141] libmachine: Using SSH client type: native
	I0425 19:00:35.949586   30712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 19:00:35.949598   30712 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-912667 && echo "ha-912667" | sudo tee /etc/hostname
	I0425 19:00:36.072731   30712 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-912667
	
	I0425 19:00:36.072760   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:36.075474   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.075793   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:36.075816   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.076045   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:00:36.076253   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:36.076421   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:36.076607   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:00:36.076784   30712 main.go:141] libmachine: Using SSH client type: native
	I0425 19:00:36.076945   30712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 19:00:36.076961   30712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-912667' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-912667/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-912667' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 19:00:36.187933   30712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 19:00:36.187965   30712 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 19:00:36.188013   30712 buildroot.go:174] setting up certificates
	I0425 19:00:36.188034   30712 provision.go:84] configureAuth start
	I0425 19:00:36.188056   30712 main.go:141] libmachine: (ha-912667) Calling .GetMachineName
	I0425 19:00:36.188394   30712 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 19:00:36.191154   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.191573   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:36.191609   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.191762   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:36.193980   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.194339   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:36.194363   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.194550   30712 provision.go:143] copyHostCerts
	I0425 19:00:36.194582   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:00:36.194624   30712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 19:00:36.194636   30712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:00:36.194716   30712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 19:00:36.194823   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:00:36.194849   30712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 19:00:36.194856   30712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:00:36.194899   30712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 19:00:36.194958   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:00:36.194989   30712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 19:00:36.194998   30712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:00:36.195031   30712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 19:00:36.195092   30712 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.ha-912667 san=[127.0.0.1 192.168.39.189 ha-912667 localhost minikube]
	I0425 19:00:36.404154   30712 provision.go:177] copyRemoteCerts
	I0425 19:00:36.404229   30712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 19:00:36.404255   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:36.406916   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.407260   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:36.407284   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.407436   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:00:36.407622   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:36.407782   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:00:36.407897   30712 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 19:00:36.493180   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 19:00:36.493259   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 19:00:36.522703   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 19:00:36.522805   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0425 19:00:36.552554   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 19:00:36.552639   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 19:00:36.581173   30712 provision.go:87] duration metric: took 393.12035ms to configureAuth
	I0425 19:00:36.581202   30712 buildroot.go:189] setting minikube options for container-runtime
	I0425 19:00:36.581439   30712 config.go:182] Loaded profile config "ha-912667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:00:36.581534   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:00:36.583938   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.584312   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:00:36.584339   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:00:36.584545   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:00:36.584748   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:36.584928   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:00:36.585061   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:00:36.585212   30712 main.go:141] libmachine: Using SSH client type: native
	I0425 19:00:36.585382   30712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 19:00:36.585399   30712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 19:02:07.501252   30712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 19:02:07.501281   30712 machine.go:97] duration metric: took 1m31.676011283s to provisionDockerMachine
	I0425 19:02:07.501295   30712 start.go:293] postStartSetup for "ha-912667" (driver="kvm2")
	I0425 19:02:07.501307   30712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 19:02:07.501322   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.501668   30712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 19:02:07.501702   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:02:07.504671   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.505070   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.505096   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.505309   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:02:07.505509   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.505640   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:02:07.505760   30712 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 19:02:07.591385   30712 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 19:02:07.597458   30712 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 19:02:07.597484   30712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 19:02:07.597542   30712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 19:02:07.597606   30712 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 19:02:07.597617   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 19:02:07.597693   30712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 19:02:07.609135   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:02:07.635515   30712 start.go:296] duration metric: took 134.203777ms for postStartSetup
	I0425 19:02:07.635567   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.635886   30712 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0425 19:02:07.635918   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:02:07.638341   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.638711   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.638737   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.638889   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:02:07.639069   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.639186   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:02:07.639322   30712 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	W0425 19:02:07.722422   30712 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0425 19:02:07.722449   30712 fix.go:56] duration metric: took 1m31.917426388s for fixHost
	I0425 19:02:07.722474   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:02:07.724768   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.725163   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.725193   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.725300   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:02:07.725460   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.725610   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.725786   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:02:07.725951   30712 main.go:141] libmachine: Using SSH client type: native
	I0425 19:02:07.726114   30712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I0425 19:02:07.726125   30712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 19:02:07.831970   30712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714071727.806375549
	
	I0425 19:02:07.831992   30712 fix.go:216] guest clock: 1714071727.806375549
	I0425 19:02:07.831999   30712 fix.go:229] Guest: 2024-04-25 19:02:07.806375549 +0000 UTC Remote: 2024-04-25 19:02:07.722458379 +0000 UTC m=+92.060875887 (delta=83.91717ms)
	I0425 19:02:07.832035   30712 fix.go:200] guest clock delta is within tolerance: 83.91717ms
	I0425 19:02:07.832040   30712 start.go:83] releasing machines lock for "ha-912667", held for 1m32.027031339s
	I0425 19:02:07.832059   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.832326   30712 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 19:02:07.835035   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.835420   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.835451   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.835569   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.836152   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.836348   30712 main.go:141] libmachine: (ha-912667) Calling .DriverName
	I0425 19:02:07.836447   30712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 19:02:07.836488   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:02:07.836600   30712 ssh_runner.go:195] Run: cat /version.json
	I0425 19:02:07.836631   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHHostname
	I0425 19:02:07.839030   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.839373   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.839401   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.839423   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.839525   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:02:07.839694   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.839841   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:02:07.839869   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:07.839900   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:07.839991   30712 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 19:02:07.840033   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHPort
	I0425 19:02:07.840178   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHKeyPath
	I0425 19:02:07.840337   30712 main.go:141] libmachine: (ha-912667) Calling .GetSSHUsername
	I0425 19:02:07.840477   30712 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/ha-912667/id_rsa Username:docker}
	I0425 19:02:07.920227   30712 ssh_runner.go:195] Run: systemctl --version
	I0425 19:02:07.943706   30712 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 19:02:08.110896   30712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 19:02:08.120528   30712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 19:02:08.120601   30712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 19:02:08.132132   30712 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0425 19:02:08.132152   30712 start.go:494] detecting cgroup driver to use...
	I0425 19:02:08.132214   30712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 19:02:08.152138   30712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 19:02:08.167744   30712 docker.go:217] disabling cri-docker service (if available) ...
	I0425 19:02:08.167816   30712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 19:02:08.184986   30712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 19:02:08.202823   30712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 19:02:08.359055   30712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 19:02:08.514315   30712 docker.go:233] disabling docker service ...
	I0425 19:02:08.514379   30712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 19:02:08.532743   30712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 19:02:08.547170   30712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 19:02:08.700369   30712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 19:02:08.854817   30712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 19:02:08.871547   30712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 19:02:08.895252   30712 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 19:02:08.895340   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.907619   30712 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 19:02:08.907691   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.919696   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.931320   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.942616   30712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 19:02:08.954598   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.965787   30712 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.978477   30712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:02:08.989743   30712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 19:02:09.000038   30712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 19:02:09.009796   30712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:02:09.163791   30712 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 19:02:10.078544   30712 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 19:02:10.078620   30712 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 19:02:10.084969   30712 start.go:562] Will wait 60s for crictl version
	I0425 19:02:10.085047   30712 ssh_runner.go:195] Run: which crictl
	I0425 19:02:10.089776   30712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 19:02:10.140486   30712 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 19:02:10.140588   30712 ssh_runner.go:195] Run: crio --version
	I0425 19:02:10.173563   30712 ssh_runner.go:195] Run: crio --version
	I0425 19:02:10.209225   30712 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 19:02:10.210556   30712 main.go:141] libmachine: (ha-912667) Calling .GetIP
	I0425 19:02:10.213233   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:10.213577   30712 main.go:141] libmachine: (ha-912667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:04:73", ip: ""} in network mk-ha-912667: {Iface:virbr1 ExpiryTime:2024-04-25 19:49:51 +0000 UTC Type:0 Mac:52:54:00:f2:04:73 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:ha-912667 Clientid:01:52:54:00:f2:04:73}
	I0425 19:02:10.213606   30712 main.go:141] libmachine: (ha-912667) DBG | domain ha-912667 has defined IP address 192.168.39.189 and MAC address 52:54:00:f2:04:73 in network mk-ha-912667
	I0425 19:02:10.213810   30712 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 19:02:10.219074   30712 kubeadm.go:877] updating cluster {Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.232 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 19:02:10.219190   30712 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:02:10.219226   30712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:02:10.269189   30712 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:02:10.269209   30712 crio.go:433] Images already preloaded, skipping extraction
	I0425 19:02:10.269256   30712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:02:10.307094   30712 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:02:10.307113   30712 cache_images.go:84] Images are preloaded, skipping loading
	I0425 19:02:10.307121   30712 kubeadm.go:928] updating node { 192.168.39.189 8443 v1.30.0 crio true true} ...
	I0425 19:02:10.307221   30712 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-912667 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 19:02:10.307287   30712 ssh_runner.go:195] Run: crio config
	I0425 19:02:10.364052   30712 cni.go:84] Creating CNI manager for ""
	I0425 19:02:10.364072   30712 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0425 19:02:10.364083   30712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 19:02:10.364102   30712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-912667 NodeName:ha-912667 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 19:02:10.364231   30712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-912667"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.189
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 19:02:10.364256   30712 kube-vip.go:111] generating kube-vip config ...
	I0425 19:02:10.364292   30712 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0425 19:02:10.379175   30712 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0425 19:02:10.379273   30712 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
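	kube-vip runs as a static pod on every control-plane node; it announces the HA virtual IP (192.168.39.254 in the config above) on eth0 via ARP and uses leader election on the plndr-cp-lock lease, with load-balancing of the API server port enabled (lb_enable/lb_port). A quick way to confirm the VIP is live on the node that currently holds the lease might look like this (illustrative only, not commands the test ran):
	
	  # the elected leader should have the VIP bound to eth0
	  ip addr show eth0 | grep 192.168.39.254
	  # the API server should answer on the VIP (self-signed cert, hence -k)
	  curl -sk https://192.168.39.254:8443/healthz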
	I0425 19:02:10.379320   30712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 19:02:10.390441   30712 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 19:02:10.390497   30712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0425 19:02:10.402548   30712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0425 19:02:10.422214   30712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 19:02:10.440272   30712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0425 19:02:10.458608   30712 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0425 19:02:10.479035   30712 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0425 19:02:10.483541   30712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:02:10.649521   30712 ssh_runner.go:195] Run: sudo systemctl start kubelet
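	After the daemon-reload and kubelet start above, a failed start would normally show up in the unit status or journal; a hedged example of how one might check on the node (not part of the test run):
	
	  sudo systemctl status kubelet --no-pager
	  sudo journalctl -u kubelet -n 20 --no-pager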
	I0425 19:02:10.666543   30712 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667 for IP: 192.168.39.189
	I0425 19:02:10.666567   30712 certs.go:194] generating shared ca certs ...
	I0425 19:02:10.666588   30712 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:02:10.666764   30712 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 19:02:10.666838   30712 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 19:02:10.666851   30712 certs.go:256] generating profile certs ...
	I0425 19:02:10.666958   30712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/client.key
	I0425 19:02:10.666995   30712 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.5d430312
	I0425 19:02:10.667011   30712 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.5d430312 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189 192.168.39.66 192.168.39.179 192.168.39.254]
	I0425 19:02:10.846879   30712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.5d430312 ...
	I0425 19:02:10.846911   30712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.5d430312: {Name:mk7d97a128946db98f43e52607d66bc2c3314779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:02:10.847075   30712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.5d430312 ...
	I0425 19:02:10.847087   30712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.5d430312: {Name:mk4287911b1bba38d86f72f1ea7d421bb210d31c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:02:10.847157   30712 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt.5d430312 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt
	I0425 19:02:10.847310   30712 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key.5d430312 -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key
	I0425 19:02:10.847437   30712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key
	I0425 19:02:10.847451   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 19:02:10.847464   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 19:02:10.847479   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 19:02:10.847492   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 19:02:10.847506   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 19:02:10.847518   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 19:02:10.847528   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 19:02:10.847541   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 19:02:10.847584   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 19:02:10.847609   30712 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 19:02:10.847619   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 19:02:10.847651   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 19:02:10.847673   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 19:02:10.847692   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 19:02:10.847727   30712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:02:10.847755   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 19:02:10.847770   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 19:02:10.847782   30712 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:02:10.848343   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 19:02:10.879752   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 19:02:10.910020   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 19:02:10.939462   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 19:02:10.968462   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0425 19:02:10.997075   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 19:02:11.026613   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 19:02:11.054919   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/ha-912667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0425 19:02:11.083524   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 19:02:11.112094   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 19:02:11.137420   30712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 19:02:11.165135   30712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 19:02:11.184099   30712 ssh_runner.go:195] Run: openssl version
	I0425 19:02:11.190963   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 19:02:11.203003   30712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 19:02:11.208176   30712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 19:02:11.208228   30712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 19:02:11.215287   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 19:02:11.226501   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 19:02:11.239446   30712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 19:02:11.245052   30712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:02:11.245124   30712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 19:02:11.251698   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 19:02:11.263482   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 19:02:11.276255   30712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:02:11.282115   30712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:02:11.282183   30712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:02:11.289037   30712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
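	The three blocks above copy each CA into /usr/share/ca-certificates, symlink it into /etc/ssl/certs under its own name, and then add a second symlink named after the certificate's OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL-based clients look up trusted CAs. The same step written out by hand, as a sketch (the hash value depends on the certificate):
	
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"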
	I0425 19:02:11.300467   30712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 19:02:11.306162   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 19:02:11.313406   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 19:02:11.319784   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 19:02:11.326282   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 19:02:11.332797   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 19:02:11.339707   30712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
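	The openssl -checkend 86400 runs above exit non-zero if the certificate in question expires within the next 24 hours (86,400 seconds), which is how minikube decides whether existing control-plane certificates can be reused. An equivalent standalone check, using one of the paths above:
	
	  # exit status 0: valid for at least another day; 1: expires sooner
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "certificate ok" || echo "certificate expires within 24h"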
	I0425 19:02:11.347446   30712 kubeadm.go:391] StartCluster: {Name:ha-912667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-912667 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.66 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.179 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.232 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:02:11.347573   30712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 19:02:11.347614   30712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 19:02:11.399260   30712 cri.go:89] found id: "1211fe8cf15a145726136383a04b807104fd7b5d177b97cd1a5a6edae325cf97"
	I0425 19:02:11.399281   30712 cri.go:89] found id: "ef1831847cd85fa4ac3e3f05b1280b29e6a5a53ca491342d6634a119e3dff4f4"
	I0425 19:02:11.399285   30712 cri.go:89] found id: "65857c225af2b5971d31044aaaa5a7c2b1134e809bd7c368565df21afa7b2735"
	I0425 19:02:11.399289   30712 cri.go:89] found id: "7b85242a1dd03e4116bf4a4a811d120c72ac40179e8fde0fe2d73503f49c8737"
	I0425 19:02:11.399292   30712 cri.go:89] found id: "8479138ced5e5a6b00b685a1538c683197de7083d857d194836fcffa26fc2cfb"
	I0425 19:02:11.399295   30712 cri.go:89] found id: "853ae533d68261b7aaa8b7604ae60d64f17d8fa31a0f38accbfb5a4fc7f51012"
	I0425 19:02:11.399297   30712 cri.go:89] found id: "5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786"
	I0425 19:02:11.399300   30712 cri.go:89] found id: "877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275"
	I0425 19:02:11.399304   30712 cri.go:89] found id: "35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386"
	I0425 19:02:11.399310   30712 cri.go:89] found id: "e24e946cc9871d59976b6e84efd38da336416d3442e75673080a8e5eb92ed6d4"
	I0425 19:02:11.399318   30712 cri.go:89] found id: "6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5"
	I0425 19:02:11.399323   30712 cri.go:89] found id: "860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f"
	I0425 19:02:11.399327   30712 cri.go:89] found id: "9c0bd11b87eb333fd5fc61ff4ff42398c82950042ca9c1eef36b928098deee98"
	I0425 19:02:11.399331   30712 cri.go:89] found id: "8ab9c0712a08a952bf137667fd232b693ff4b86e62a807e3a5287def0334f353"
	I0425 19:02:11.399339   30712 cri.go:89] found id: ""
	I0425 19:02:11.399384   30712 ssh_runner.go:195] Run: sudo runc list -f json
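	The container IDs above come from the crictl listing shown a few lines earlier, filtered to the kube-system namespace. Any of those IDs can be fed back to crictl for a closer look; an illustrative follow-up, not part of the test run:
	
	  sudo crictl inspect 1211fe8cf15a145726136383a04b807104fd7b5d177b97cd1a5a6edae325cf97 | head
	  sudo crictl logs --tail 20 1211fe8cf15a145726136383a04b807104fd7b5d177b97cd1a5a6edae325cf97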
	
	
	==> CRI-O <==
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.231518356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714072043231490644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a7a1849-4a69-4a9c-950b-ce2c2434847d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.233771738Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36e1081e-3bf8-41ca-8f84-d482f8b2286e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.233996886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36e1081e-3bf8-41ca-8f84-d482f8b2286e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.234801880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:664d121edb6b713211c13c0cedfbd4e6ff816158d01902cca3a3dc628d413f71,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714071817893557387,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b482009bb4bda86ed80aaf6ffbbdaeac0d3c80aac4919534d3d93ff7a0cfd128,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071802893423666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35af403e5f5b77c282e2ab8be29c6a089e75d1f1c8a54fd06c0799c3de43e0d1,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071777898678801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9375cf649d3fdd55b73e4c5640030d0b39a95f084260b601490e3388f4820a6a,PodSandboxId:2b8901f4a6c6a571896ff7dd2b68466ed43867b879bde5af06d0be6b525dc65d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071770037840128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1257292410198065593c6f0f876643b3d20d2dd3e8011891b55d35e4758d63,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071768185327503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd34a8712f61ebcf0d16486428c1a8ae453956567861a37e43f74936bb9d32f,PodSandboxId:d3c2d7d029f167c48c7289d45bccaf1c339aed778ac71b4d716cd26fce459c95,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071751848498714,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469b88169de51b24d813181338c887bc,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:9a710c78ee141c7c5c9eb1a047b80fdb89959cf74148c464b8565c4350725fea,PodSandboxId:f2c84e148f9ed49d3c243d2f4ac490df3be9fdd31e14b148d7b417aaf79b7837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071737446777323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:7666e74773b8beaa8c78a2cffc10db8d396168ac8eb484af76b1d5dad8cdf736,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714071737402882286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380ad5799
738ff5b76de6315d59529ac9c8a67ba2e59ae5eead7ec951d80f6b7,PodSandboxId:c14af9e5af973eadc39cc9450066a894ed0fc80b6553e93c87ffacafc89f2c87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736619518745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714071736691086314,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8279db081c114756c0ef4369b7f2dcd81110abcda6769ff15356ef16d82899f,PodSandboxId:a49728b483c24f26ec07260fa0afa5e2160b2520c679e2d60b5d5bda447d6150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736643374877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3997d681dd3c6abf6fecf3119895f445d9d960e69cc4d6b33b77f4313810dda6,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714071736590333518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f799a7e1725b4f3b7c0a031b6fada2efc97f1662c8c5d5759c4beedb20b3807,PodSandboxId:fa91a613ac5de27f3594fc1fb14797d03ecfab3c4f49bca5b9135600c41cbfb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071736462151375,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b5eacd47457075997143150b5a47f1e32bc6ab5272420955b83158111ce6a3,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714071736420443998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74
c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e670ab4471745be8eecfacef997853b6afd5e8508a46b249cc8831adbbaf33,PodSandboxId:ac490e91cdf368f8ebbad78a2c6ce66b8f402bcf55de23c1889a0f0e2e13dfb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071736407586084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kuberne
tes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714071248602464773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernete
s.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034742632420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034727910480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714071032735268131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714071012728272369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714071012719284298,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36e1081e-3bf8-41ca-8f84-d482f8b2286e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.292928923Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86cf7123-d865-4cfd-964d-95810981ec8b name=/runtime.v1.RuntimeService/Version
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.293030995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86cf7123-d865-4cfd-964d-95810981ec8b name=/runtime.v1.RuntimeService/Version
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.294914177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2bd7f7c1-b92e-4889-9204-109263edecc2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.295349384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714072043295323201,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bd7f7c1-b92e-4889-9204-109263edecc2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.296133204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37ed1c46-4272-40fa-ab04-0d4f80c52f65 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.296219386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37ed1c46-4272-40fa-ab04-0d4f80c52f65 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.296659096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:664d121edb6b713211c13c0cedfbd4e6ff816158d01902cca3a3dc628d413f71,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714071817893557387,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b482009bb4bda86ed80aaf6ffbbdaeac0d3c80aac4919534d3d93ff7a0cfd128,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071802893423666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35af403e5f5b77c282e2ab8be29c6a089e75d1f1c8a54fd06c0799c3de43e0d1,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071777898678801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9375cf649d3fdd55b73e4c5640030d0b39a95f084260b601490e3388f4820a6a,PodSandboxId:2b8901f4a6c6a571896ff7dd2b68466ed43867b879bde5af06d0be6b525dc65d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071770037840128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1257292410198065593c6f0f876643b3d20d2dd3e8011891b55d35e4758d63,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071768185327503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd34a8712f61ebcf0d16486428c1a8ae453956567861a37e43f74936bb9d32f,PodSandboxId:d3c2d7d029f167c48c7289d45bccaf1c339aed778ac71b4d716cd26fce459c95,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071751848498714,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469b88169de51b24d813181338c887bc,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:9a710c78ee141c7c5c9eb1a047b80fdb89959cf74148c464b8565c4350725fea,PodSandboxId:f2c84e148f9ed49d3c243d2f4ac490df3be9fdd31e14b148d7b417aaf79b7837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071737446777323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:7666e74773b8beaa8c78a2cffc10db8d396168ac8eb484af76b1d5dad8cdf736,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714071737402882286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380ad5799
738ff5b76de6315d59529ac9c8a67ba2e59ae5eead7ec951d80f6b7,PodSandboxId:c14af9e5af973eadc39cc9450066a894ed0fc80b6553e93c87ffacafc89f2c87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736619518745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714071736691086314,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8279db081c114756c0ef4369b7f2dcd81110abcda6769ff15356ef16d82899f,PodSandboxId:a49728b483c24f26ec07260fa0afa5e2160b2520c679e2d60b5d5bda447d6150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736643374877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3997d681dd3c6abf6fecf3119895f445d9d960e69cc4d6b33b77f4313810dda6,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714071736590333518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f799a7e1725b4f3b7c0a031b6fada2efc97f1662c8c5d5759c4beedb20b3807,PodSandboxId:fa91a613ac5de27f3594fc1fb14797d03ecfab3c4f49bca5b9135600c41cbfb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071736462151375,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b5eacd47457075997143150b5a47f1e32bc6ab5272420955b83158111ce6a3,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714071736420443998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74
c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e670ab4471745be8eecfacef997853b6afd5e8508a46b249cc8831adbbaf33,PodSandboxId:ac490e91cdf368f8ebbad78a2c6ce66b8f402bcf55de23c1889a0f0e2e13dfb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071736407586084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kuberne
tes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714071248602464773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernete
s.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034742632420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034727910480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714071032735268131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714071012728272369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714071012719284298,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37ed1c46-4272-40fa-ab04-0d4f80c52f65 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.346982691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20c16f43-1d72-42d9-9e76-18cd54f45306 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.347060894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20c16f43-1d72-42d9-9e76-18cd54f45306 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.348433395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b74227a7-c78d-4b0f-8168-fec8faedf445 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.349116019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714072043349086005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b74227a7-c78d-4b0f-8168-fec8faedf445 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.349760657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=882fd012-5791-4086-a0c1-bf9912cb58d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.349878089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=882fd012-5791-4086-a0c1-bf9912cb58d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.350591227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:664d121edb6b713211c13c0cedfbd4e6ff816158d01902cca3a3dc628d413f71,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714071817893557387,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b482009bb4bda86ed80aaf6ffbbdaeac0d3c80aac4919534d3d93ff7a0cfd128,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071802893423666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35af403e5f5b77c282e2ab8be29c6a089e75d1f1c8a54fd06c0799c3de43e0d1,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071777898678801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9375cf649d3fdd55b73e4c5640030d0b39a95f084260b601490e3388f4820a6a,PodSandboxId:2b8901f4a6c6a571896ff7dd2b68466ed43867b879bde5af06d0be6b525dc65d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071770037840128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1257292410198065593c6f0f876643b3d20d2dd3e8011891b55d35e4758d63,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071768185327503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd34a8712f61ebcf0d16486428c1a8ae453956567861a37e43f74936bb9d32f,PodSandboxId:d3c2d7d029f167c48c7289d45bccaf1c339aed778ac71b4d716cd26fce459c95,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071751848498714,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469b88169de51b24d813181338c887bc,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:9a710c78ee141c7c5c9eb1a047b80fdb89959cf74148c464b8565c4350725fea,PodSandboxId:f2c84e148f9ed49d3c243d2f4ac490df3be9fdd31e14b148d7b417aaf79b7837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071737446777323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:7666e74773b8beaa8c78a2cffc10db8d396168ac8eb484af76b1d5dad8cdf736,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714071737402882286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380ad5799
738ff5b76de6315d59529ac9c8a67ba2e59ae5eead7ec951d80f6b7,PodSandboxId:c14af9e5af973eadc39cc9450066a894ed0fc80b6553e93c87ffacafc89f2c87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736619518745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714071736691086314,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8279db081c114756c0ef4369b7f2dcd81110abcda6769ff15356ef16d82899f,PodSandboxId:a49728b483c24f26ec07260fa0afa5e2160b2520c679e2d60b5d5bda447d6150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736643374877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3997d681dd3c6abf6fecf3119895f445d9d960e69cc4d6b33b77f4313810dda6,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714071736590333518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f799a7e1725b4f3b7c0a031b6fada2efc97f1662c8c5d5759c4beedb20b3807,PodSandboxId:fa91a613ac5de27f3594fc1fb14797d03ecfab3c4f49bca5b9135600c41cbfb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071736462151375,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b5eacd47457075997143150b5a47f1e32bc6ab5272420955b83158111ce6a3,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714071736420443998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74
c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e670ab4471745be8eecfacef997853b6afd5e8508a46b249cc8831adbbaf33,PodSandboxId:ac490e91cdf368f8ebbad78a2c6ce66b8f402bcf55de23c1889a0f0e2e13dfb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071736407586084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kuberne
tes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714071248602464773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernete
s.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034742632420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034727910480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714071032735268131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714071012728272369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714071012719284298,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=882fd012-5791-4086-a0c1-bf9912cb58d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.403856338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=831aba8f-1f00-41ab-9587-e4d0183a7fb4 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.403960709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=831aba8f-1f00-41ab-9587-e4d0183a7fb4 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.405861486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b86e796c-8e8f-4753-9c00-46ba74031713 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.406356455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714072043406329969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b86e796c-8e8f-4753-9c00-46ba74031713 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.407625610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a957dbd6-b12b-47e0-bd80-9d8cac7e0112 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.407802360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a957dbd6-b12b-47e0-bd80-9d8cac7e0112 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:07:23 ha-912667 crio[3935]: time="2024-04-25 19:07:23.408268904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:664d121edb6b713211c13c0cedfbd4e6ff816158d01902cca3a3dc628d413f71,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714071817893557387,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b482009bb4bda86ed80aaf6ffbbdaeac0d3c80aac4919534d3d93ff7a0cfd128,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714071802893423666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35af403e5f5b77c282e2ab8be29c6a089e75d1f1c8a54fd06c0799c3de43e0d1,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714071777898678801,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9375cf649d3fdd55b73e4c5640030d0b39a95f084260b601490e3388f4820a6a,PodSandboxId:2b8901f4a6c6a571896ff7dd2b68466ed43867b879bde5af06d0be6b525dc65d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714071770037840128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernetes.container.hash: b23919e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be1257292410198065593c6f0f876643b3d20d2dd3e8011891b55d35e4758d63,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714071768185327503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bd34a8712f61ebcf0d16486428c1a8ae453956567861a37e43f74936bb9d32f,PodSandboxId:d3c2d7d029f167c48c7289d45bccaf1c339aed778ac71b4d716cd26fce459c95,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1714071751848498714,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469b88169de51b24d813181338c887bc,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:9a710c78ee141c7c5c9eb1a047b80fdb89959cf74148c464b8565c4350725fea,PodSandboxId:f2c84e148f9ed49d3c243d2f4ac490df3be9fdd31e14b148d7b417aaf79b7837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714071737446777323,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:7666e74773b8beaa8c78a2cffc10db8d396168ac8eb484af76b1d5dad8cdf736,PodSandboxId:89a08f34ca2427b8ed87b0271d356ed1319154edd4cb2d594ed239113991c5a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714071737402882286,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3a0b111-609d-49b3-a056-71eb4b641224,},Annotations:map[string]string{io.kubernetes.container.hash: 731b3ea5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:380ad5799
738ff5b76de6315d59529ac9c8a67ba2e59ae5eead7ec951d80f6b7,PodSandboxId:c14af9e5af973eadc39cc9450066a894ed0fc80b6553e93c87ffacafc89f2c87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736619518745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef,PodSandboxId:8aa35c9f3e53f2672890fce833396c891e6985f856a05cf1ae56fbfc467293e3,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714071736691086314,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xlvjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 191ff28e-07d7-459e-afe5-e3d8c23e1016,},Annotations:map[string]string{io.kubernetes.container.hash: cf239fdf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8279db081c114756c0ef4369b7f2dcd81110abcda6769ff15356ef16d82899f,PodSandboxId:a49728b483c24f26ec07260fa0afa5e2160b2520c679e2d60b5d5bda447d6150,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714071736643374877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dn
s-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3997d681dd3c6abf6fecf3119895f445d9d960e69cc4d6b33b77f4313810dda6,PodSandboxId:2e26a50c7fc42e1a1d95a6878712449d2af716097143b48a3fa10713e0e0000a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714071736590333518,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-912667,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 0f8eae540ae6f75803c1cce277c135c8,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f799a7e1725b4f3b7c0a031b6fada2efc97f1662c8c5d5759c4beedb20b3807,PodSandboxId:fa91a613ac5de27f3594fc1fb14797d03ecfab3c4f49bca5b9135600c41cbfb5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714071736462151375,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
2d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b5eacd47457075997143150b5a47f1e32bc6ab5272420955b83158111ce6a3,PodSandboxId:0e0656ef80264a322dd87aa79cc461c05903163a05eb35c8b3fce5a3b4e8391e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714071736420443998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef9d6e5decdc8ee65e0e74
c73411380,},Annotations:map[string]string{io.kubernetes.container.hash: d9e4b59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e670ab4471745be8eecfacef997853b6afd5e8508a46b249cc8831adbbaf33,PodSandboxId:ac490e91cdf368f8ebbad78a2c6ce66b8f402bcf55de23c1889a0f0e2e13dfb6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714071736407586084,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kuberne
tes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb806d6102b91812ca156c47b7a241b5ded687c9a806ca2f3d5820b7daa026ca,PodSandboxId:4a7d7ef3e980ee5356b9954c65a405acd4f25bba6c24ad8cf7f61388bf465b6c,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714071248602464773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nxhjn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eb1062c1-8c87-4e99-80a2-a114d2e0c709,},Annotations:map[string]string{io.kubernete
s.container.hash: b23919e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786,PodSandboxId:5f41aaba12a45578c3f25cc9b08c07d7399392b5173115d776a1ba8d8e45d66b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034742632420,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-22wvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56a5b1eb-a6a7-4105-b8b5-7aa731b2b23e,},Annotations:map[string]string{io.kubernetes.container.hash: 6d157d08,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275,PodSandboxId:7eff20f80efe1e8d16783a61a1d077db303f0af1f11e734ec33dbdcd88956d1a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714071034727910480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-h4s2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9e2233c-5350-47ab-bdae-6fa35972b601,},Annotations:map[string]string{io.kubernetes.container.hash: 7f571be0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386,PodSandboxId:56d2b6ff099a094e336b31ab948f4a40f6e098fe372082da9a1d14a0b38d6ea1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714071032735268131,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkgv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bf1cac1-1e11-4667-8d35-8a0bbbd40a6a,},Annotations:map[string]string{io.kubernetes.container.hash: a369a1d7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5,PodSandboxId:10902ac1c9f4f35f0c65692f0a4c3994762a01ec2425b5d154d591658173f3c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8
b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714071012728272369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92d273ee11723a3e0ac3b49ca2112419,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f,PodSandboxId:b27e008a10a0673fffbd1eace2e2656465f9382638925e4dac21d84b39aabfe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAIN
ER_EXITED,CreatedAt:1714071012719284298,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-912667,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63dc5c47bed909879d47a4fe5ebbb9a,},Annotations:map[string]string{io.kubernetes.container.hash: 37dcfd15,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a957dbd6-b12b-47e0-bd80-9d8cac7e0112 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	664d121edb6b7       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               4                   8aa35c9f3e53f       kindnet-xlvjt
	b482009bb4bda       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   89a08f34ca242       storage-provisioner
	35af403e5f5b7       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      4 minutes ago       Running             kube-controller-manager   2                   2e26a50c7fc42       kube-controller-manager-ha-912667
	9375cf649d3fd       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   2b8901f4a6c6a       busybox-fc5497c4f-nxhjn
	be12572924101       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      4 minutes ago       Running             kube-apiserver            3                   0e0656ef80264       kube-apiserver-ha-912667
	2bd34a8712f61       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago       Running             kube-vip                  0                   d3c2d7d029f16       kube-vip-ha-912667
	9a710c78ee141       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                1                   f2c84e148f9ed       kube-proxy-mkgv5
	7666e74773b8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   89a08f34ca242       storage-provisioner
	15d248c866f48       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               3                   8aa35c9f3e53f       kindnet-xlvjt
	d8279db081c11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   a49728b483c24       coredns-7db6d8ff4d-h4s2h
	380ad5799738f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   c14af9e5af973       coredns-7db6d8ff4d-22wvx
	3997d681dd3c6       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Exited              kube-controller-manager   1                   2e26a50c7fc42       kube-controller-manager-ha-912667
	5f799a7e1725b       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      5 minutes ago       Running             kube-scheduler            1                   fa91a613ac5de       kube-scheduler-ha-912667
	62b5eacd47457       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Exited              kube-apiserver            2                   0e0656ef80264       kube-apiserver-ha-912667
	74e670ab44717       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   ac490e91cdf36       etcd-ha-912667
	cb806d6102b91       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   4a7d7ef3e980e       busybox-fc5497c4f-nxhjn
	5b5e973107f16       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   5f41aaba12a45       coredns-7db6d8ff4d-22wvx
	877510603b828       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   7eff20f80efe1       coredns-7db6d8ff4d-h4s2h
	35f0443a12a2f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      16 minutes ago      Exited              kube-proxy                0                   56d2b6ff099a0       kube-proxy-mkgv5
	6d0da8d06f797       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      17 minutes ago      Exited              kube-scheduler            0                   10902ac1c9f4f       kube-scheduler-ha-912667
	860c8d827dba6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   b27e008a10a06       etcd-ha-912667
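The container listing above is the CRI-level view of the primary node taken after the restart; the Exited rows are the pre-restart attempts and the Running rows with higher ATTEMPT counts are their replacements. An equivalent listing can usually be reproduced against the profile shown in these logs, though the exact invocation used by the log collector is an assumption here:

	  minikube -p ha-912667 ssh -- sudo crictl ps -a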
	
	
	==> coredns [380ad5799738ff5b76de6315d59529ac9c8a67ba2e59ae5eead7ec951d80f6b7] <==
	Trace[617395045]: [10.001070639s] [10.001070639s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1985995287]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Apr-2024 19:02:21.356) (total time: 10002ms):
	Trace[1985995287]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (19:02:31.358)
	Trace[1985995287]: [10.002333169s] [10.002333169s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43328->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:43328->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36734->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[422302785]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Apr-2024 19:02:28.301) (total time: 13532ms):
	Trace[422302785]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36734->10.96.0.1:443: read: connection reset by peer 13531ms (19:02:41.833)
	Trace[422302785]: [13.532325198s] [13.532325198s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36734->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43326->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:43326->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [5b5e973107f163dcb2751f398f0fdcd1eb79a1992f734b4a47c2ec7f13015786] <==
	[INFO] 10.244.0.4:32831 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001738929s
	[INFO] 10.244.1.2:38408 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00017538s
	[INFO] 10.244.2.2:37503 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003970142s
	[INFO] 10.244.2.2:40887 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218678s
	[INFO] 10.244.0.4:49981 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001952122s
	[INFO] 10.244.0.4:56986 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183129s
	[INFO] 10.244.0.4:33316 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126163s
	[INFO] 10.244.1.2:34817 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000365634s
	[INFO] 10.244.1.2:38909 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001350261s
	[INFO] 10.244.1.2:51802 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101088s
	[INFO] 10.244.2.2:47175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020899s
	[INFO] 10.244.2.2:46654 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000319039s
	[INFO] 10.244.2.2:36020 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135369s
	[INFO] 10.244.1.2:58245 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000248988s
	[INFO] 10.244.1.2:45237 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202978s
	[INFO] 10.244.0.4:52108 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149798s
	[INFO] 10.244.0.4:52793 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093152s
	[INFO] 10.244.1.2:57128 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187429s
	[INFO] 10.244.1.2:40536 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000186246s
	[INFO] 10.244.1.2:52690 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120066s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [877510603b8289ac42f40c18ba683a1a715aa06b59fb587c7634182d44120275] <==
	[INFO] 10.244.0.4:51578 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122143s
	[INFO] 10.244.1.2:40259 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165953s
	[INFO] 10.244.1.2:39729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001829607s
	[INFO] 10.244.1.2:34733 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172404s
	[INFO] 10.244.1.2:45725 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129433s
	[INFO] 10.244.1.2:35820 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133249s
	[INFO] 10.244.2.2:40405 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00168841s
	[INFO] 10.244.0.4:40751 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000295717s
	[INFO] 10.244.0.4:35528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000102349s
	[INFO] 10.244.0.4:36374 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00035359s
	[INFO] 10.244.0.4:51732 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098091s
	[INFO] 10.244.1.2:41291 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000329271s
	[INFO] 10.244.1.2:36756 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159777s
	[INFO] 10.244.2.2:54364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000374806s
	[INFO] 10.244.2.2:35469 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0003009s
	[INFO] 10.244.2.2:57557 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000412395s
	[INFO] 10.244.2.2:55375 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188342s
	[INFO] 10.244.0.4:50283 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136579s
	[INFO] 10.244.0.4:60253 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000062518s
	[INFO] 10.244.1.2:48368 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000591883s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d8279db081c114756c0ef4369b7f2dcd81110abcda6769ff15356ef16d82899f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41672->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[443656988]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Apr-2024 19:02:28.534) (total time: 10606ms):
	Trace[443656988]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41672->10.96.0.1:443: read: connection reset by peer 10606ms (19:02:39.141)
	Trace[443656988]: [10.606796381s] [10.606796381s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:41672->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41676->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[221408255]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Apr-2024 19:02:31.362) (total time: 10470ms):
	Trace[221408255]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41676->10.96.0.1:443: read: connection reset by peer 10470ms (19:02:41.832)
	Trace[221408255]: [10.470671401s] [10.470671401s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:41676->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43400->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43400->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
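The repeated "connection refused" and "no route to host" errors from all three coredns instances point at the in-cluster service VIP (10.96.0.1:443) being unreachable while the apiserver restarted, not at coredns itself. Assuming the kubectl context is named after the profile, as minikube does by default, the endpoints behind that VIP can be checked once the control plane is back with:

	  kubectl --context ha-912667 get endpoints kubernetes -n default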
	
	
	==> describe nodes <==
	Name:               ha-912667
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T18_50_19_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:50:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:07:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:02:54 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:02:54 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:02:54 +0000   Thu, 25 Apr 2024 18:50:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:02:54 +0000   Thu, 25 Apr 2024 18:50:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    ha-912667
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3a8edadaa67460ebdc313c0c3e1c3f7
	  System UUID:                a3a8edad-aa67-460e-bdc3-13c0c3e1c3f7
	  Boot ID:                    dc005c29-5a5e-4df7-8967-c057d8b3aa0a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nxhjn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-22wvx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-h4s2h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-912667                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-xlvjt                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-912667             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-912667    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-mkgv5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-912667             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-912667                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4m26s              kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-912667 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node ha-912667 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-912667 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                kubelet          Node ha-912667 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node ha-912667 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node ha-912667 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal   NodeReady                16m                kubelet          Node ha-912667 status is now: NodeReady
	  Normal   RegisteredNode           14m                node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Warning  ContainerGCFailed        6m5s               kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m16s              node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal   RegisteredNode           4m14s              node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	  Normal   RegisteredNode           3m15s              node-controller  Node ha-912667 event: Registered Node ha-912667 in Controller
	
	
	Name:               ha-912667-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T18_52_33_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:52:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:07:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:05:56 +0000   Thu, 25 Apr 2024 19:05:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:05:56 +0000   Thu, 25 Apr 2024 19:05:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:05:56 +0000   Thu, 25 Apr 2024 19:05:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:05:56 +0000   Thu, 25 Apr 2024 19:05:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.66
	  Hostname:    ha-912667-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82894439088e4cc98841c062c296fef3
	  System UUID:                82894439-088e-4cc9-8841-c062c296fef3
	  Boot ID:                    5efcf1bd-8cfb-462d-98a7-2cfcf6ac7d39
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tcxzk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-912667-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-sq4lb                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-912667-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-912667-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-rkbcp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-912667-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-912667-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 4m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-912667-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-912667-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                    node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-912667-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-912667-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m50s (x8 over 4m50s)  kubelet          Node ha-912667-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m50s (x8 over 4m50s)  kubelet          Node ha-912667-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m50s (x7 over 4m50s)  kubelet          Node ha-912667-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-912667-m02 event: Registered Node ha-912667-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-912667-m02 status is now: NodeNotReady
	
	
	Name:               ha-912667-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-912667-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=ha-912667
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T18_54_45_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 18:54:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-912667-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:04:55 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 25 Apr 2024 19:04:34 +0000   Thu, 25 Apr 2024 19:05:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 25 Apr 2024 19:04:34 +0000   Thu, 25 Apr 2024 19:05:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 25 Apr 2024 19:04:34 +0000   Thu, 25 Apr 2024 19:05:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 25 Apr 2024 19:04:34 +0000   Thu, 25 Apr 2024 19:05:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-912667-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c6d1da6a42954aa3b31899cd270783aa
	  System UUID:                c6d1da6a-4295-4aa3-b318-99cd270783aa
	  Boot ID:                    532d016c-b414-4642-af4e-a25f0615f501
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-b9nnj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kindnet-4l974              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-64vg4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-912667-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-912667-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-912667-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-912667-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m17s                  node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-912667-m04 event: Registered Node ha-912667-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m50s (x2 over 2m50s)  kubelet          Node ha-912667-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m50s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m50s (x2 over 2m50s)  kubelet          Node ha-912667-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m50s (x2 over 2m50s)  kubelet          Node ha-912667-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m50s                  kubelet          Node ha-912667-m04 has been rebooted, boot id: 532d016c-b414-4642-af4e-a25f0615f501
	  Normal   NodeReady                2m50s                  kubelet          Node ha-912667-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 3m37s)   node-controller  Node ha-912667-m04 status is now: NodeNotReady
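The node blocks above show ha-912667 and ha-912667-m02 Ready again after the restart, while ha-912667-m04 carries unreachable taints and Unknown conditions because its kubelet stopped posting status. Assuming a working context for this profile, the same per-node view can be regenerated with a describe call, for example:

	  kubectl --context ha-912667 describe node ha-912667-m04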
	
	
	==> dmesg <==
	[Apr25 18:50] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.058108] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076447] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.197185] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.122034] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.313908] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.923241] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.067466] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.659823] systemd-fstab-generator[961]: Ignoring "noauto" option for root device
	[  +0.462418] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.581179] systemd-fstab-generator[1379]: Ignoring "noauto" option for root device
	[  +0.076665] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.874397] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.005828] kauditd_printk_skb: 74 callbacks suppressed
	[Apr25 18:59] kauditd_printk_skb: 1 callbacks suppressed
	[Apr25 19:02] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
	[  +0.155948] systemd-fstab-generator[3867]: Ignoring "noauto" option for root device
	[  +0.182921] systemd-fstab-generator[3881]: Ignoring "noauto" option for root device
	[  +0.158504] systemd-fstab-generator[3893]: Ignoring "noauto" option for root device
	[  +0.304221] systemd-fstab-generator[3921]: Ignoring "noauto" option for root device
	[  +1.482834] systemd-fstab-generator[4024]: Ignoring "noauto" option for root device
	[  +5.386683] kauditd_printk_skb: 122 callbacks suppressed
	[ +13.225322] kauditd_printk_skb: 86 callbacks suppressed
	[  +9.059996] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.857180] kauditd_printk_skb: 5 callbacks suppressed
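The dmesg excerpt appears to be routine boot and service-restart noise (systemd-fstab-generator messages and suppressed audit callbacks) with no kernel-level errors visible. A comparable snapshot can be taken directly on the VM:

	  minikube -p ha-912667 ssh -- sudo dmesg | tail -n 30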
	
	
	==> etcd [74e670ab4471745be8eecfacef997853b6afd5e8508a46b249cc8831adbbaf33] <==
	{"level":"info","ts":"2024-04-25T19:03:51.405957Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:03:51.405984Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:03:51.406519Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6fb28b9aae66857a","to":"2f342753978b2ebf","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-25T19:03:51.406604Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:03:51.422262Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:03:51.423991Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"warn","ts":"2024-04-25T19:03:52.632795Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2f342753978b2ebf","rtt":"0s","error":"dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-25T19:03:52.632975Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2f342753978b2ebf","rtt":"0s","error":"dial tcp 192.168.39.179:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-25T19:04:49.332625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a switched to configuration voters=(4639795839039494326 8048648980531676538)"}
	{"level":"info","ts":"2024-04-25T19:04:49.335456Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","removed-remote-peer-id":"2f342753978b2ebf","removed-remote-peer-urls":["https://192.168.39.179:2380"]}
	{"level":"info","ts":"2024-04-25T19:04:49.335674Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2f342753978b2ebf"}
	{"level":"warn","ts":"2024-04-25T19:04:49.336355Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:04:49.336446Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2f342753978b2ebf"}
	{"level":"warn","ts":"2024-04-25T19:04:49.337211Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:04:49.337271Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:04:49.337614Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"warn","ts":"2024-04-25T19:04:49.338205Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf","error":"context canceled"}
	{"level":"warn","ts":"2024-04-25T19:04:49.338294Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2f342753978b2ebf","error":"failed to read 2f342753978b2ebf on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-25T19:04:49.33833Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"warn","ts":"2024-04-25T19:04:49.338633Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf","error":"context canceled"}
	{"level":"info","ts":"2024-04-25T19:04:49.338658Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:04:49.338674Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:04:49.338769Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6fb28b9aae66857a","removed-remote-peer-id":"2f342753978b2ebf"}
	{"level":"warn","ts":"2024-04-25T19:04:49.364324Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"6fb28b9aae66857a","remote-peer-id-stream-handler":"6fb28b9aae66857a","remote-peer-id-from":"2f342753978b2ebf"}
	{"level":"warn","ts":"2024-04-25T19:04:49.366111Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.179:44444","server-name":"","error":"EOF"}
	
	
	==> etcd [860c8d827dba689aefe876a0012be74b5ba769c1af313b1e7ff3b1e6879f398f] <==
	2024/04/25 19:00:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/25 19:00:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/25 19:00:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/25 19:00:36 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-25T19:00:36.809371Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9618157281405767419,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-04-25T19:00:37.015553Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-25T19:00:37.015622Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-25T19:00:37.015758Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6fb28b9aae66857a","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-25T19:00:37.016008Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016152Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016311Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016604Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.01674Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016844Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6fb28b9aae66857a","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016964Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4063ddbba048d8b6"}
	{"level":"info","ts":"2024-04-25T19:00:37.016995Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017101Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017243Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017482Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017583Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017797Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6fb28b9aae66857a","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.017874Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2f342753978b2ebf"}
	{"level":"info","ts":"2024-04-25T19:00:37.021154Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2024-04-25T19:00:37.021482Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2024-04-25T19:00:37.021583Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-912667","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	
	
	==> kernel <==
	 19:07:24 up 17 min,  0 users,  load average: 0.49, 0.48, 0.37
	Linux ha-912667 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef] <==
	I0425 19:02:17.256753       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0425 19:02:27.563414       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0425 19:02:29.544233       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0425 19:02:41.832388       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.66:48974->10.96.0.1:443: read: connection reset by peer
	I0425 19:02:43.833220       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0425 19:02:46.834947       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [664d121edb6b713211c13c0cedfbd4e6ff816158d01902cca3a3dc628d413f71] <==
	I0425 19:06:39.253296       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 19:06:49.271007       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 19:06:49.271053       1 main.go:227] handling current node
	I0425 19:06:49.271065       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 19:06:49.271071       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 19:06:49.271216       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 19:06:49.271246       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 19:06:59.287124       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 19:06:59.287218       1 main.go:227] handling current node
	I0425 19:06:59.287239       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 19:06:59.287249       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 19:06:59.287494       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 19:06:59.287540       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 19:07:09.303115       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 19:07:09.303169       1 main.go:227] handling current node
	I0425 19:07:09.303180       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 19:07:09.303186       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 19:07:09.303308       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 19:07:09.303313       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	I0425 19:07:19.311489       1 main.go:223] Handling node with IPs: map[192.168.39.189:{}]
	I0425 19:07:19.311542       1 main.go:227] handling current node
	I0425 19:07:19.311554       1 main.go:223] Handling node with IPs: map[192.168.39.66:{}]
	I0425 19:07:19.311561       1 main.go:250] Node ha-912667-m02 has CIDR [10.244.1.0/24] 
	I0425 19:07:19.311662       1 main.go:223] Handling node with IPs: map[192.168.39.232:{}]
	I0425 19:07:19.311667       1 main.go:250] Node ha-912667-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [62b5eacd47457075997143150b5a47f1e32bc6ab5272420955b83158111ce6a3] <==
	I0425 19:02:17.135384       1 options.go:221] external host was not specified, using 192.168.39.189
	I0425 19:02:17.141927       1 server.go:148] Version: v1.30.0
	I0425 19:02:17.141977       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:02:18.130122       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0425 19:02:18.134094       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0425 19:02:18.134144       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0425 19:02:18.134306       1 instance.go:299] Using reconciler: lease
	I0425 19:02:18.134813       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0425 19:02:38.125080       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0425 19:02:38.131211       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0425 19:02:38.135673       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0425 19:02:38.135679       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [be1257292410198065593c6f0f876643b3d20d2dd3e8011891b55d35e4758d63] <==
	I0425 19:02:50.399606       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0425 19:02:50.401861       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0425 19:02:50.481048       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0425 19:02:50.481113       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0425 19:02:50.482004       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0425 19:02:50.482075       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0425 19:02:50.485191       1 shared_informer.go:320] Caches are synced for configmaps
	I0425 19:02:50.486218       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0425 19:02:50.490245       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0425 19:02:50.490325       1 aggregator.go:165] initial CRD sync complete...
	I0425 19:02:50.490354       1 autoregister_controller.go:141] Starting autoregister controller
	I0425 19:02:50.490395       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0425 19:02:50.490405       1 cache.go:39] Caches are synced for autoregister controller
	I0425 19:02:50.492523       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0425 19:02:50.527371       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.179]
	I0425 19:02:50.528944       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0425 19:02:50.530308       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0425 19:02:50.530329       1 policy_source.go:224] refreshing policies
	I0425 19:02:50.582040       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0425 19:02:50.629671       1 controller.go:615] quota admission added evaluator for: endpoints
	I0425 19:02:50.655439       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0425 19:02:50.672625       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0425 19:02:51.387496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0425 19:02:52.280675       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.179 192.168.39.189]
	W0425 19:03:12.105849       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.189 192.168.39.66]
	
	
	==> kube-controller-manager [35af403e5f5b77c282e2ab8be29c6a089e75d1f1c8a54fd06c0799c3de43e0d1] <==
	I0425 19:04:46.117482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.852µs"
	I0425 19:04:48.063333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.829µs"
	I0425 19:04:48.911225       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.562µs"
	I0425 19:04:48.943468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.789µs"
	I0425 19:04:48.954252       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.902µs"
	I0425 19:04:49.490466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.397275ms"
	I0425 19:04:49.490582       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.046µs"
	I0425 19:05:01.033777       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-912667-m04"
	E0425 19:05:09.402203       1 gc_controller.go:153] "Failed to get node" err="node \"ha-912667-m03\" not found" logger="pod-garbage-collector-controller" node="ha-912667-m03"
	E0425 19:05:09.402350       1 gc_controller.go:153] "Failed to get node" err="node \"ha-912667-m03\" not found" logger="pod-garbage-collector-controller" node="ha-912667-m03"
	E0425 19:05:09.402392       1 gc_controller.go:153] "Failed to get node" err="node \"ha-912667-m03\" not found" logger="pod-garbage-collector-controller" node="ha-912667-m03"
	E0425 19:05:09.402423       1 gc_controller.go:153] "Failed to get node" err="node \"ha-912667-m03\" not found" logger="pod-garbage-collector-controller" node="ha-912667-m03"
	E0425 19:05:09.402454       1 gc_controller.go:153] "Failed to get node" err="node \"ha-912667-m03\" not found" logger="pod-garbage-collector-controller" node="ha-912667-m03"
	E0425 19:05:29.403284       1 gc_controller.go:153] "Failed to get node" err="node \"ha-912667-m03\" not found" logger="pod-garbage-collector-controller" node="ha-912667-m03"
	E0425 19:05:29.403348       1 gc_controller.go:153] "Failed to get node" err="node \"ha-912667-m03\" not found" logger="pod-garbage-collector-controller" node="ha-912667-m03"
	E0425 19:05:29.403361       1 gc_controller.go:153] "Failed to get node" err="node \"ha-912667-m03\" not found" logger="pod-garbage-collector-controller" node="ha-912667-m03"
	E0425 19:05:29.403368       1 gc_controller.go:153] "Failed to get node" err="node \"ha-912667-m03\" not found" logger="pod-garbage-collector-controller" node="ha-912667-m03"
	E0425 19:05:29.403376       1 gc_controller.go:153] "Failed to get node" err="node \"ha-912667-m03\" not found" logger="pod-garbage-collector-controller" node="ha-912667-m03"
	I0425 19:05:34.532021       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-912667-m04"
	I0425 19:05:34.718002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.974547ms"
	I0425 19:05:34.718389       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.47µs"
	I0425 19:05:37.446251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.796907ms"
	I0425 19:05:37.446414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.13µs"
	I0425 19:05:53.892347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.799104ms"
	I0425 19:05:53.892479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.469µs"
	
	
	==> kube-controller-manager [3997d681dd3c6abf6fecf3119895f445d9d960e69cc4d6b33b77f4313810dda6] <==
	I0425 19:02:18.301464       1 serving.go:380] Generated self-signed cert in-memory
	I0425 19:02:18.625815       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0425 19:02:18.625868       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:02:18.627832       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0425 19:02:18.628112       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0425 19:02:18.628514       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0425 19:02:18.629347       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0425 19:02:39.142611       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.189:8443/healthz\": dial tcp 192.168.39.189:8443: connect: connection refused"
	
	
	==> kube-proxy [35f0443a12a2fd7b69263c5179cf7e12b621597ce02c87c3158e1aa448335386] <==
	E0425 18:59:24.332835       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:27.401786       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:27.401880       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:27.401946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:27.401787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:27.401972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:27.401909       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:33.547076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:33.547159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:33.547221       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:33.547249       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:36.617847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:36.617973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:42.761607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:42.761743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:45.832905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:45.832984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 18:59:45.833119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 18:59:45.833141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 19:00:01.192303       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 19:00:01.192447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2040": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 19:00:01.192670       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 19:00:01.192796       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-912667&resourceVersion=2026": dial tcp 192.168.39.254:8443: connect: no route to host
	W0425 19:00:07.339111       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	E0425 19:00:07.339368       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2044": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [9a710c78ee141c7c5c9eb1a047b80fdb89959cf74148c464b8565c4350725fea] <==
	I0425 19:02:18.320401       1 server_linux.go:69] "Using iptables proxy"
	E0425 19:02:19.433074       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-912667\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0425 19:02:22.505405       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-912667\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0425 19:02:25.576158       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-912667\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0425 19:02:31.722379       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-912667\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0425 19:02:40.937422       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-912667\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0425 19:02:57.501486       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.189"]
	I0425 19:02:57.555078       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:02:57.555159       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:02:57.555181       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:02:57.558439       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:02:57.558820       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:02:57.558864       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:02:57.560336       1 config.go:192] "Starting service config controller"
	I0425 19:02:57.560455       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:02:57.560486       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:02:57.560490       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:02:57.561413       1 config.go:319] "Starting node config controller"
	I0425 19:02:57.561448       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:02:57.660627       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:02:57.660671       1 shared_informer.go:320] Caches are synced for service config
	I0425 19:02:57.662198       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5f799a7e1725b4f3b7c0a031b6fada2efc97f1662c8c5d5759c4beedb20b3807] <==
	W0425 19:02:47.154408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.189:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.154485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.189:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:47.205805       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.189:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.205944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.189:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:47.289033       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.189:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.289201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.189:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:47.398187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.189:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.398302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.189:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:47.418066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.189:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.418156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.189:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:47.973204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.189:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:47.973339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.189:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:48.315135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.189:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	E0425 19:02:48.315227       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.189:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.189:8443: connect: connection refused
	W0425 19:02:50.408241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 19:02:50.408302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 19:02:50.408482       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 19:02:50.408522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0425 19:02:50.408593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0425 19:02:50.408632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0425 19:02:50.408680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 19:02:50.408773       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 19:02:50.412877       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 19:02:50.413044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0425 19:02:53.147673       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6d0da8d06f797fa86b18213bb11088b5e792b69eeb78172e80b088e08cab14a5] <==
	W0425 19:00:33.125154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 19:00:33.125323       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 19:00:33.243120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0425 19:00:33.243228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0425 19:00:33.264627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 19:00:33.264832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0425 19:00:33.351271       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0425 19:00:33.351370       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0425 19:00:33.420989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 19:00:33.421116       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 19:00:33.471445       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 19:00:33.471565       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0425 19:00:33.762935       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 19:00:33.763068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 19:00:34.283302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 19:00:34.283431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 19:00:34.542249       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 19:00:34.542378       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0425 19:00:34.602073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0425 19:00:34.602253       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0425 19:00:34.660855       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 19:00:34.660966       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 19:00:35.019260       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:00:35.019330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:00:36.706567       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 25 19:03:23 ha-912667 kubelet[1386]: I0425 19:03:23.880676    1386 scope.go:117] "RemoveContainer" containerID="15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef"
	Apr 25 19:03:23 ha-912667 kubelet[1386]: E0425 19:03:23.881292    1386 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-xlvjt_kube-system(191ff28e-07d7-459e-afe5-e3d8c23e1016)\"" pod="kube-system/kindnet-xlvjt" podUID="191ff28e-07d7-459e-afe5-e3d8c23e1016"
	Apr 25 19:03:37 ha-912667 kubelet[1386]: I0425 19:03:37.879992    1386 scope.go:117] "RemoveContainer" containerID="15d248c866f4896c594e6d29c10d5e0ca088d6c63c30d307c5a4c4ee1dc2c3ef"
	Apr 25 19:03:57 ha-912667 kubelet[1386]: I0425 19:03:57.879915    1386 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-912667" podUID="bd3267a7-206d-4e47-b154-a7f17a492684"
	Apr 25 19:03:57 ha-912667 kubelet[1386]: I0425 19:03:57.901622    1386 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-912667"
	Apr 25 19:04:18 ha-912667 kubelet[1386]: E0425 19:04:18.915613    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:04:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:04:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:04:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:04:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 19:05:18 ha-912667 kubelet[1386]: E0425 19:05:18.913508    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:05:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:05:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:05:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:05:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 19:06:18 ha-912667 kubelet[1386]: E0425 19:06:18.915901    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:06:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:06:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:06:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:06:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 19:07:18 ha-912667 kubelet[1386]: E0425 19:07:18.913237    1386 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:07:18 ha-912667 kubelet[1386]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:07:18 ha-912667 kubelet[1386]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:07:18 ha-912667 kubelet[1386]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:07:18 ha-912667 kubelet[1386]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0425 19:07:22.925293   33074 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18757-6355/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-912667 -n ha-912667
helpers_test.go:261: (dbg) Run:  kubectl --context ha-912667 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.05s)

x
+
TestMultiNode/serial/RestartKeepsNodes (314.32s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-857482
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-857482
E0425 19:23:36.328561   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-857482: exit status 82 (2m2.708056534s)

-- stdout --
	* Stopping node "multinode-857482-m03"  ...
	* Stopping node "multinode-857482-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-857482" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-857482 --wait=true -v=8 --alsologtostderr
E0425 19:25:45.438954   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 19:26:39.378000   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-857482 --wait=true -v=8 --alsologtostderr: (3m9.179467585s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-857482
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-857482 -n multinode-857482
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-857482 logs -n 25: (1.631831128s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp multinode-857482-m02:/home/docker/cp-test.txt                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile932174876/001/cp-test_multinode-857482-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp multinode-857482-m02:/home/docker/cp-test.txt                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482:/home/docker/cp-test_multinode-857482-m02_multinode-857482.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n multinode-857482 sudo cat                                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-857482-m02_multinode-857482.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp multinode-857482-m02:/home/docker/cp-test.txt                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03:/home/docker/cp-test_multinode-857482-m02_multinode-857482-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n multinode-857482-m03 sudo cat                                   | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-857482-m02_multinode-857482-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp testdata/cp-test.txt                                                | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp multinode-857482-m03:/home/docker/cp-test.txt                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile932174876/001/cp-test_multinode-857482-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp multinode-857482-m03:/home/docker/cp-test.txt                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482:/home/docker/cp-test_multinode-857482-m03_multinode-857482.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n multinode-857482 sudo cat                                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-857482-m03_multinode-857482.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp multinode-857482-m03:/home/docker/cp-test.txt                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m02:/home/docker/cp-test_multinode-857482-m03_multinode-857482-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n multinode-857482-m02 sudo cat                                   | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-857482-m03_multinode-857482-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-857482 node stop m03                                                          | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	| node    | multinode-857482 node start                                                             | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-857482                                                                | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:23 UTC |                     |
	| stop    | -p multinode-857482                                                                     | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:23 UTC |                     |
	| start   | -p multinode-857482                                                                     | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:25 UTC | 25 Apr 24 19:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-857482                                                                | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:25:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:25:18.314680   43102 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:25:18.314779   43102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:25:18.314790   43102 out.go:304] Setting ErrFile to fd 2...
	I0425 19:25:18.314794   43102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:25:18.314989   43102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:25:18.315535   43102 out.go:298] Setting JSON to false
	I0425 19:25:18.316463   43102 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4064,"bootTime":1714069054,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:25:18.316521   43102 start.go:139] virtualization: kvm guest
	I0425 19:25:18.319152   43102 out.go:177] * [multinode-857482] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:25:18.320813   43102 notify.go:220] Checking for updates...
	I0425 19:25:18.320825   43102 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:25:18.322184   43102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:25:18.323633   43102 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:25:18.324898   43102 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:25:18.326091   43102 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:25:18.327311   43102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:25:18.328829   43102 config.go:182] Loaded profile config "multinode-857482": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:25:18.328939   43102 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:25:18.329348   43102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:25:18.329395   43102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:25:18.345099   43102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0425 19:25:18.345486   43102 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:25:18.345978   43102 main.go:141] libmachine: Using API Version  1
	I0425 19:25:18.345999   43102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:25:18.346323   43102 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:25:18.346523   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:25:18.380864   43102 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:25:18.382109   43102 start.go:297] selected driver: kvm2
	I0425 19:25:18.382123   43102 start.go:901] validating driver "kvm2" against &{Name:multinode-857482 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-857482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:25:18.382310   43102 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:25:18.382638   43102 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:25:18.382710   43102 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:25:18.397462   43102 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:25:18.398199   43102 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:25:18.398289   43102 cni.go:84] Creating CNI manager for ""
	I0425 19:25:18.398303   43102 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0425 19:25:18.398368   43102 start.go:340] cluster config:
	{Name:multinode-857482 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-857482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:25:18.398472   43102 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:25:18.400396   43102 out.go:177] * Starting "multinode-857482" primary control-plane node in "multinode-857482" cluster
	I0425 19:25:18.401794   43102 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:25:18.401833   43102 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:25:18.401843   43102 cache.go:56] Caching tarball of preloaded images
	I0425 19:25:18.401910   43102 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:25:18.401920   43102 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 19:25:18.402079   43102 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/config.json ...
	I0425 19:25:18.402318   43102 start.go:360] acquireMachinesLock for multinode-857482: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:25:18.402361   43102 start.go:364] duration metric: took 23.728µs to acquireMachinesLock for "multinode-857482"
	I0425 19:25:18.402375   43102 start.go:96] Skipping create...Using existing machine configuration
	I0425 19:25:18.402382   43102 fix.go:54] fixHost starting: 
	I0425 19:25:18.402642   43102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:25:18.402676   43102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:25:18.415971   43102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I0425 19:25:18.416347   43102 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:25:18.416812   43102 main.go:141] libmachine: Using API Version  1
	I0425 19:25:18.416835   43102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:25:18.417082   43102 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:25:18.417250   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:25:18.417360   43102 main.go:141] libmachine: (multinode-857482) Calling .GetState
	I0425 19:25:18.418810   43102 fix.go:112] recreateIfNeeded on multinode-857482: state=Running err=<nil>
	W0425 19:25:18.418850   43102 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 19:25:18.420624   43102 out.go:177] * Updating the running kvm2 "multinode-857482" VM ...
	I0425 19:25:18.421948   43102 machine.go:94] provisionDockerMachine start ...
	I0425 19:25:18.421971   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:25:18.422165   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:18.424634   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.425005   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:18.425029   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.425189   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:25:18.425337   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.425510   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.425645   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:25:18.425808   43102 main.go:141] libmachine: Using SSH client type: native
	I0425 19:25:18.426068   43102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0425 19:25:18.426096   43102 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 19:25:18.552103   43102 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-857482
	
	I0425 19:25:18.552142   43102 main.go:141] libmachine: (multinode-857482) Calling .GetMachineName
	I0425 19:25:18.552398   43102 buildroot.go:166] provisioning hostname "multinode-857482"
	I0425 19:25:18.552432   43102 main.go:141] libmachine: (multinode-857482) Calling .GetMachineName
	I0425 19:25:18.552623   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:18.555132   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.555516   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:18.555536   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.555697   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:25:18.555870   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.556026   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.556141   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:25:18.556299   43102 main.go:141] libmachine: Using SSH client type: native
	I0425 19:25:18.556504   43102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0425 19:25:18.556520   43102 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-857482 && echo "multinode-857482" | sudo tee /etc/hostname
	I0425 19:25:18.692590   43102 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-857482
	
	I0425 19:25:18.692632   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:18.695505   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.695870   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:18.695899   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.696079   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:25:18.696293   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.696508   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.696655   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:25:18.696801   43102 main.go:141] libmachine: Using SSH client type: native
	I0425 19:25:18.696980   43102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0425 19:25:18.697002   43102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-857482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-857482/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-857482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 19:25:18.811904   43102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 19:25:18.811936   43102 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 19:25:18.811955   43102 buildroot.go:174] setting up certificates
	I0425 19:25:18.811966   43102 provision.go:84] configureAuth start
	I0425 19:25:18.811975   43102 main.go:141] libmachine: (multinode-857482) Calling .GetMachineName
	I0425 19:25:18.812247   43102 main.go:141] libmachine: (multinode-857482) Calling .GetIP
	I0425 19:25:18.814884   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.815213   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:18.815236   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.815406   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:18.817570   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.817929   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:18.817952   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.818071   43102 provision.go:143] copyHostCerts
	I0425 19:25:18.818092   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:25:18.818115   43102 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 19:25:18.818124   43102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:25:18.818185   43102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 19:25:18.818314   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:25:18.818342   43102 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 19:25:18.818352   43102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:25:18.818392   43102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 19:25:18.818455   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:25:18.818474   43102 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 19:25:18.818480   43102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:25:18.818503   43102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 19:25:18.818560   43102 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.multinode-857482 san=[127.0.0.1 192.168.39.194 localhost minikube multinode-857482]
	I0425 19:25:19.031008   43102 provision.go:177] copyRemoteCerts
	I0425 19:25:19.031060   43102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 19:25:19.031086   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:19.033802   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:19.034146   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:19.034168   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:19.034402   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:25:19.034607   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:19.034717   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:25:19.034878   43102 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/multinode-857482/id_rsa Username:docker}
	I0425 19:25:19.123261   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 19:25:19.123324   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 19:25:19.152747   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 19:25:19.152818   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0425 19:25:19.183155   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 19:25:19.183238   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 19:25:19.209928   43102 provision.go:87] duration metric: took 397.949061ms to configureAuth
	I0425 19:25:19.209959   43102 buildroot.go:189] setting minikube options for container-runtime
	I0425 19:25:19.210185   43102 config.go:182] Loaded profile config "multinode-857482": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:25:19.210282   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:19.213095   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:19.213576   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:19.213612   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:19.213740   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:25:19.213949   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:19.214113   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:19.214255   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:25:19.214435   43102 main.go:141] libmachine: Using SSH client type: native
	I0425 19:25:19.214625   43102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0425 19:25:19.214647   43102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 19:26:49.947661   43102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 19:26:49.947688   43102 machine.go:97] duration metric: took 1m31.525724233s to provisionDockerMachine
	I0425 19:26:49.947700   43102 start.go:293] postStartSetup for "multinode-857482" (driver="kvm2")
	I0425 19:26:49.947710   43102 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 19:26:49.947727   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:26:49.948056   43102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 19:26:49.948099   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:26:49.950895   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:49.951232   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:26:49.951256   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:49.951418   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:26:49.951605   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:26:49.951759   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:26:49.951880   43102 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/multinode-857482/id_rsa Username:docker}
	I0425 19:26:50.043042   43102 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 19:26:50.048344   43102 command_runner.go:130] > NAME=Buildroot
	I0425 19:26:50.048363   43102 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0425 19:26:50.048367   43102 command_runner.go:130] > ID=buildroot
	I0425 19:26:50.048380   43102 command_runner.go:130] > VERSION_ID=2023.02.9
	I0425 19:26:50.048388   43102 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0425 19:26:50.048422   43102 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 19:26:50.048439   43102 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 19:26:50.048509   43102 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 19:26:50.048581   43102 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 19:26:50.048590   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 19:26:50.048664   43102 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 19:26:50.059300   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:26:50.087316   43102 start.go:296] duration metric: took 139.604894ms for postStartSetup
	I0425 19:26:50.087357   43102 fix.go:56] duration metric: took 1m31.684974618s for fixHost
	I0425 19:26:50.087375   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:26:50.090036   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.090399   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:26:50.090434   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.090553   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:26:50.090764   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:26:50.090884   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:26:50.091006   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:26:50.091166   43102 main.go:141] libmachine: Using SSH client type: native
	I0425 19:26:50.091366   43102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0425 19:26:50.091379   43102 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 19:26:50.208129   43102 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714073210.188297828
	
	I0425 19:26:50.208155   43102 fix.go:216] guest clock: 1714073210.188297828
	I0425 19:26:50.208163   43102 fix.go:229] Guest: 2024-04-25 19:26:50.188297828 +0000 UTC Remote: 2024-04-25 19:26:50.087360479 +0000 UTC m=+91.819923739 (delta=100.937349ms)
	I0425 19:26:50.208181   43102 fix.go:200] guest clock delta is within tolerance: 100.937349ms
	I0425 19:26:50.208186   43102 start.go:83] releasing machines lock for "multinode-857482", held for 1m31.805817152s
	I0425 19:26:50.208201   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:26:50.208479   43102 main.go:141] libmachine: (multinode-857482) Calling .GetIP
	I0425 19:26:50.211118   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.211534   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:26:50.211554   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.211705   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:26:50.212409   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:26:50.212584   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:26:50.212684   43102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 19:26:50.212718   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:26:50.212780   43102 ssh_runner.go:195] Run: cat /version.json
	I0425 19:26:50.212816   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:26:50.215261   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.215505   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.215638   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:26:50.215665   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.215791   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:26:50.215927   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:26:50.215945   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.215960   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:26:50.216110   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:26:50.216111   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:26:50.216276   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:26:50.216275   43102 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/multinode-857482/id_rsa Username:docker}
	I0425 19:26:50.216408   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:26:50.216528   43102 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/multinode-857482/id_rsa Username:docker}
	I0425 19:26:50.322587   43102 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0425 19:26:50.322645   43102 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0425 19:26:50.322802   43102 ssh_runner.go:195] Run: systemctl --version
	I0425 19:26:50.328947   43102 command_runner.go:130] > systemd 252 (252)
	I0425 19:26:50.328979   43102 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0425 19:26:50.329223   43102 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 19:26:50.489339   43102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0425 19:26:50.498255   43102 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0425 19:26:50.498671   43102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 19:26:50.498735   43102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 19:26:50.511299   43102 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0425 19:26:50.511396   43102 start.go:494] detecting cgroup driver to use...
	I0425 19:26:50.511461   43102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 19:26:50.529730   43102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 19:26:50.544068   43102 docker.go:217] disabling cri-docker service (if available) ...
	I0425 19:26:50.544129   43102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 19:26:50.558754   43102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 19:26:50.573886   43102 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 19:26:50.728966   43102 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 19:26:50.880723   43102 docker.go:233] disabling docker service ...
	I0425 19:26:50.880802   43102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 19:26:50.905733   43102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 19:26:50.924979   43102 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 19:26:51.079638   43102 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 19:26:51.237279   43102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 19:26:51.252454   43102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 19:26:51.273572   43102 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0425 19:26:51.273949   43102 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 19:26:51.274007   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.285838   43102 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 19:26:51.285912   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.297897   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.309188   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.320337   43102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 19:26:51.332216   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.343879   43102 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.356612   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.369265   43102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 19:26:51.379917   43102 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0425 19:26:51.379999   43102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 19:26:51.390135   43102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:26:51.541085   43102 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 19:27:01.174870   43102 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.633748192s)
	I0425 19:27:01.174911   43102 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 19:27:01.174963   43102 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 19:27:01.180662   43102 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0425 19:27:01.180688   43102 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0425 19:27:01.180694   43102 command_runner.go:130] > Device: 0,22	Inode: 1318        Links: 1
	I0425 19:27:01.180701   43102 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0425 19:27:01.180706   43102 command_runner.go:130] > Access: 2024-04-25 19:27:01.031436493 +0000
	I0425 19:27:01.180712   43102 command_runner.go:130] > Modify: 2024-04-25 19:27:01.031436493 +0000
	I0425 19:27:01.180718   43102 command_runner.go:130] > Change: 2024-04-25 19:27:01.031436493 +0000
	I0425 19:27:01.180722   43102 command_runner.go:130] >  Birth: -
	I0425 19:27:01.180737   43102 start.go:562] Will wait 60s for crictl version
	I0425 19:27:01.180789   43102 ssh_runner.go:195] Run: which crictl
	I0425 19:27:01.184880   43102 command_runner.go:130] > /usr/bin/crictl
	I0425 19:27:01.185061   43102 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 19:27:01.224955   43102 command_runner.go:130] > Version:  0.1.0
	I0425 19:27:01.224982   43102 command_runner.go:130] > RuntimeName:  cri-o
	I0425 19:27:01.224990   43102 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0425 19:27:01.224999   43102 command_runner.go:130] > RuntimeApiVersion:  v1
	I0425 19:27:01.226094   43102 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 19:27:01.226174   43102 ssh_runner.go:195] Run: crio --version
	I0425 19:27:01.257617   43102 command_runner.go:130] > crio version 1.29.1
	I0425 19:27:01.257643   43102 command_runner.go:130] > Version:        1.29.1
	I0425 19:27:01.257651   43102 command_runner.go:130] > GitCommit:      unknown
	I0425 19:27:01.257658   43102 command_runner.go:130] > GitCommitDate:  unknown
	I0425 19:27:01.257664   43102 command_runner.go:130] > GitTreeState:   clean
	I0425 19:27:01.257673   43102 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0425 19:27:01.257680   43102 command_runner.go:130] > GoVersion:      go1.21.6
	I0425 19:27:01.257687   43102 command_runner.go:130] > Compiler:       gc
	I0425 19:27:01.257703   43102 command_runner.go:130] > Platform:       linux/amd64
	I0425 19:27:01.257717   43102 command_runner.go:130] > Linkmode:       dynamic
	I0425 19:27:01.257739   43102 command_runner.go:130] > BuildTags:      
	I0425 19:27:01.257750   43102 command_runner.go:130] >   containers_image_ostree_stub
	I0425 19:27:01.257757   43102 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0425 19:27:01.257767   43102 command_runner.go:130] >   btrfs_noversion
	I0425 19:27:01.257774   43102 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0425 19:27:01.257783   43102 command_runner.go:130] >   libdm_no_deferred_remove
	I0425 19:27:01.257788   43102 command_runner.go:130] >   seccomp
	I0425 19:27:01.257795   43102 command_runner.go:130] > LDFlags:          unknown
	I0425 19:27:01.257805   43102 command_runner.go:130] > SeccompEnabled:   true
	I0425 19:27:01.257811   43102 command_runner.go:130] > AppArmorEnabled:  false
	I0425 19:27:01.259111   43102 ssh_runner.go:195] Run: crio --version
	I0425 19:27:01.290646   43102 command_runner.go:130] > crio version 1.29.1
	I0425 19:27:01.290668   43102 command_runner.go:130] > Version:        1.29.1
	I0425 19:27:01.290674   43102 command_runner.go:130] > GitCommit:      unknown
	I0425 19:27:01.290678   43102 command_runner.go:130] > GitCommitDate:  unknown
	I0425 19:27:01.290683   43102 command_runner.go:130] > GitTreeState:   clean
	I0425 19:27:01.290688   43102 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0425 19:27:01.290692   43102 command_runner.go:130] > GoVersion:      go1.21.6
	I0425 19:27:01.290696   43102 command_runner.go:130] > Compiler:       gc
	I0425 19:27:01.290700   43102 command_runner.go:130] > Platform:       linux/amd64
	I0425 19:27:01.290704   43102 command_runner.go:130] > Linkmode:       dynamic
	I0425 19:27:01.290710   43102 command_runner.go:130] > BuildTags:      
	I0425 19:27:01.290714   43102 command_runner.go:130] >   containers_image_ostree_stub
	I0425 19:27:01.290718   43102 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0425 19:27:01.290722   43102 command_runner.go:130] >   btrfs_noversion
	I0425 19:27:01.290730   43102 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0425 19:27:01.290738   43102 command_runner.go:130] >   libdm_no_deferred_remove
	I0425 19:27:01.290750   43102 command_runner.go:130] >   seccomp
	I0425 19:27:01.290754   43102 command_runner.go:130] > LDFlags:          unknown
	I0425 19:27:01.290758   43102 command_runner.go:130] > SeccompEnabled:   true
	I0425 19:27:01.290762   43102 command_runner.go:130] > AppArmorEnabled:  false
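
The two crio --version dumps feed the "Preparing Kubernetes v1.30.0 on CRI-O 1.29.1" line that follows; only the bare version from the "crio version 1.29.1" header is needed. A hedged sketch of extracting it, assuming that header format:

// A sketch of pulling the bare CRI-O version out of `crio --version`,
// relying on the "crio version 1.29.1" header shown above.
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

var crioVersionRe = regexp.MustCompile(`crio version (\d+\.\d+\.\d+)`)

func crioVersion() (string, error) {
	out, err := exec.Command("crio", "--version").Output()
	if err != nil {
		return "", err
	}
	m := crioVersionRe.FindSubmatch(out)
	if m == nil {
		return "", fmt.Errorf("unexpected crio --version output: %q", out)
	}
	return string(m[1]), nil
}

func main() {
	v, err := crioVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("CRI-O", v)
}
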
	I0425 19:27:01.294361   43102 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 19:27:01.295728   43102 main.go:141] libmachine: (multinode-857482) Calling .GetIP
	I0425 19:27:01.298276   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:27:01.298694   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:27:01.298724   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:27:01.298908   43102 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 19:27:01.303545   43102 command_runner.go:130] > 192.168.39.1	host.minikube.internal
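
The grep above confirms that host.minikube.internal resolves via /etc/hosts inside the guest. An illustrative local-filesystem sketch of the same check-then-append idea (minikube actually runs the grep over SSH; ensureHostsEntry is a hypothetical helper, not its implementation):

// A local-filesystem sketch of ensuring the /etc/hosts entry exists.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	entry := ip + "\t" + host
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.TrimSpace(line) == entry {
			return nil // already present, nothing to do
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintln(f, entry)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
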
	I0425 19:27:01.303754   43102 kubeadm.go:877] updating cluster {Name:multinode-857482 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-857482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 19:27:01.303907   43102 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:27:01.303960   43102 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:27:01.354297   43102 command_runner.go:130] > {
	I0425 19:27:01.354327   43102 command_runner.go:130] >   "images": [
	I0425 19:27:01.354334   43102 command_runner.go:130] >     {
	I0425 19:27:01.354345   43102 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0425 19:27:01.354351   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.354356   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0425 19:27:01.354360   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354364   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.354373   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0425 19:27:01.354380   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0425 19:27:01.354383   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354390   43102 command_runner.go:130] >       "size": "65291810",
	I0425 19:27:01.354400   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.354407   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.354420   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.354430   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.354436   43102 command_runner.go:130] >     },
	I0425 19:27:01.354444   43102 command_runner.go:130] >     {
	I0425 19:27:01.354454   43102 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0425 19:27:01.354458   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.354472   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0425 19:27:01.354481   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354488   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.354500   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0425 19:27:01.354515   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0425 19:27:01.354522   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354528   43102 command_runner.go:130] >       "size": "1363676",
	I0425 19:27:01.354536   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.354546   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.354554   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.354558   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.354564   43102 command_runner.go:130] >     },
	I0425 19:27:01.354572   43102 command_runner.go:130] >     {
	I0425 19:27:01.354582   43102 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0425 19:27:01.354591   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.354601   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0425 19:27:01.354610   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354617   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.354632   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0425 19:27:01.354644   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0425 19:27:01.354652   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354659   43102 command_runner.go:130] >       "size": "31470524",
	I0425 19:27:01.354668   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.354675   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.354684   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.354691   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.354702   43102 command_runner.go:130] >     },
	I0425 19:27:01.354711   43102 command_runner.go:130] >     {
	I0425 19:27:01.354720   43102 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0425 19:27:01.354728   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.354733   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0425 19:27:01.354748   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354756   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.354771   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0425 19:27:01.354795   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0425 19:27:01.354804   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354814   43102 command_runner.go:130] >       "size": "61245718",
	I0425 19:27:01.354823   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.354837   43102 command_runner.go:130] >       "username": "nonroot",
	I0425 19:27:01.354847   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.354853   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.354861   43102 command_runner.go:130] >     },
	I0425 19:27:01.354867   43102 command_runner.go:130] >     {
	I0425 19:27:01.354877   43102 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0425 19:27:01.354887   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.354894   43102 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0425 19:27:01.354901   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354905   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.354919   43102 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0425 19:27:01.354934   43102 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0425 19:27:01.354942   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354950   43102 command_runner.go:130] >       "size": "150779692",
	I0425 19:27:01.354959   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.354965   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.354974   43102 command_runner.go:130] >       },
	I0425 19:27:01.354980   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.354985   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.354989   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.354996   43102 command_runner.go:130] >     },
	I0425 19:27:01.355002   43102 command_runner.go:130] >     {
	I0425 19:27:01.355018   43102 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0425 19:27:01.355028   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.355035   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0425 19:27:01.355043   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355050   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.355065   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0425 19:27:01.355075   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0425 19:27:01.355081   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355089   43102 command_runner.go:130] >       "size": "117609952",
	I0425 19:27:01.355098   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.355105   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.355114   43102 command_runner.go:130] >       },
	I0425 19:27:01.355126   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.355136   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.355142   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.355150   43102 command_runner.go:130] >     },
	I0425 19:27:01.355155   43102 command_runner.go:130] >     {
	I0425 19:27:01.355163   43102 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0425 19:27:01.355169   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.355181   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0425 19:27:01.355190   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355197   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.355213   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0425 19:27:01.355229   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0425 19:27:01.355237   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355243   43102 command_runner.go:130] >       "size": "112170310",
	I0425 19:27:01.355249   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.355254   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.355263   43102 command_runner.go:130] >       },
	I0425 19:27:01.355270   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.355280   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.355289   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.355294   43102 command_runner.go:130] >     },
	I0425 19:27:01.355300   43102 command_runner.go:130] >     {
	I0425 19:27:01.355312   43102 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0425 19:27:01.355322   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.355328   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0425 19:27:01.355332   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355337   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.355365   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0425 19:27:01.355382   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0425 19:27:01.355390   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355397   43102 command_runner.go:130] >       "size": "85932953",
	I0425 19:27:01.355405   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.355412   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.355417   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.355421   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.355423   43102 command_runner.go:130] >     },
	I0425 19:27:01.355429   43102 command_runner.go:130] >     {
	I0425 19:27:01.355439   43102 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0425 19:27:01.355445   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.355453   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0425 19:27:01.355459   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355466   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.355481   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0425 19:27:01.355495   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0425 19:27:01.355502   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355507   43102 command_runner.go:130] >       "size": "63026502",
	I0425 19:27:01.355515   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.355521   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.355530   43102 command_runner.go:130] >       },
	I0425 19:27:01.355537   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.355546   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.355553   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.355560   43102 command_runner.go:130] >     },
	I0425 19:27:01.355565   43102 command_runner.go:130] >     {
	I0425 19:27:01.355576   43102 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0425 19:27:01.355588   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.355598   43102 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0425 19:27:01.355605   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355616   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.355628   43102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0425 19:27:01.355643   43102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0425 19:27:01.355651   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355658   43102 command_runner.go:130] >       "size": "750414",
	I0425 19:27:01.355666   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.355674   43102 command_runner.go:130] >         "value": "65535"
	I0425 19:27:01.355685   43102 command_runner.go:130] >       },
	I0425 19:27:01.355692   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.355701   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.355707   43102 command_runner.go:130] >       "pinned": true
	I0425 19:27:01.355716   43102 command_runner.go:130] >     }
	I0425 19:27:01.355722   43102 command_runner.go:130] >   ]
	I0425 19:27:01.355731   43102 command_runner.go:130] > }
	I0425 19:27:01.356065   43102 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:27:01.356088   43102 crio.go:433] Images already preloaded, skipping extraction
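
The preload check above decodes the `sudo crictl images --output json` payload and compares it against the images required for the requested Kubernetes version. A sketch under those assumptions, with struct fields mirroring the JSON printed in the log and an illustrative required-image list (not minikube's crio.go logic):

// A sketch of the preload check: decode `crictl images --output json`
// (field names mirror the JSON above) and confirm the required tags exist.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func allPresent(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Illustrative subset of the images a v1.30.0 control plane needs.
	ok, err := allPresent([]string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("all required images preloaded:", ok)
}
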
	I0425 19:27:01.356148   43102 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:27:01.399050   43102 command_runner.go:130] > {
	I0425 19:27:01.399071   43102 command_runner.go:130] >   "images": [
	I0425 19:27:01.399075   43102 command_runner.go:130] >     {
	I0425 19:27:01.399083   43102 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0425 19:27:01.399087   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399093   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0425 19:27:01.399096   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399100   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399108   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0425 19:27:01.399115   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0425 19:27:01.399118   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399123   43102 command_runner.go:130] >       "size": "65291810",
	I0425 19:27:01.399126   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.399130   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399140   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399144   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399152   43102 command_runner.go:130] >     },
	I0425 19:27:01.399156   43102 command_runner.go:130] >     {
	I0425 19:27:01.399162   43102 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0425 19:27:01.399170   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399175   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0425 19:27:01.399178   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399183   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399190   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0425 19:27:01.399204   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0425 19:27:01.399207   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399211   43102 command_runner.go:130] >       "size": "1363676",
	I0425 19:27:01.399215   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.399225   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399231   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399236   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399242   43102 command_runner.go:130] >     },
	I0425 19:27:01.399245   43102 command_runner.go:130] >     {
	I0425 19:27:01.399253   43102 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0425 19:27:01.399258   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399265   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0425 19:27:01.399272   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399276   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399286   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0425 19:27:01.399294   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0425 19:27:01.399301   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399305   43102 command_runner.go:130] >       "size": "31470524",
	I0425 19:27:01.399311   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.399325   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399335   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399339   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399343   43102 command_runner.go:130] >     },
	I0425 19:27:01.399346   43102 command_runner.go:130] >     {
	I0425 19:27:01.399353   43102 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0425 19:27:01.399359   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399364   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0425 19:27:01.399371   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399379   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399389   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0425 19:27:01.399421   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0425 19:27:01.399430   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399434   43102 command_runner.go:130] >       "size": "61245718",
	I0425 19:27:01.399439   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.399443   43102 command_runner.go:130] >       "username": "nonroot",
	I0425 19:27:01.399449   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399456   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399460   43102 command_runner.go:130] >     },
	I0425 19:27:01.399465   43102 command_runner.go:130] >     {
	I0425 19:27:01.399472   43102 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0425 19:27:01.399478   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399483   43102 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0425 19:27:01.399488   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399492   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399502   43102 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0425 19:27:01.399511   43102 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0425 19:27:01.399517   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399521   43102 command_runner.go:130] >       "size": "150779692",
	I0425 19:27:01.399527   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.399531   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.399535   43102 command_runner.go:130] >       },
	I0425 19:27:01.399539   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399543   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399550   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399553   43102 command_runner.go:130] >     },
	I0425 19:27:01.399560   43102 command_runner.go:130] >     {
	I0425 19:27:01.399566   43102 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0425 19:27:01.399572   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399578   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0425 19:27:01.399584   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399587   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399597   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0425 19:27:01.399606   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0425 19:27:01.399612   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399621   43102 command_runner.go:130] >       "size": "117609952",
	I0425 19:27:01.399628   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.399632   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.399641   43102 command_runner.go:130] >       },
	I0425 19:27:01.399648   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399653   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399659   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399663   43102 command_runner.go:130] >     },
	I0425 19:27:01.399669   43102 command_runner.go:130] >     {
	I0425 19:27:01.399675   43102 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0425 19:27:01.399681   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399687   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0425 19:27:01.399693   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399697   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399706   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0425 19:27:01.399719   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0425 19:27:01.399733   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399743   43102 command_runner.go:130] >       "size": "112170310",
	I0425 19:27:01.399749   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.399760   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.399768   43102 command_runner.go:130] >       },
	I0425 19:27:01.399774   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399783   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399789   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399797   43102 command_runner.go:130] >     },
	I0425 19:27:01.399802   43102 command_runner.go:130] >     {
	I0425 19:27:01.399815   43102 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0425 19:27:01.399823   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399832   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0425 19:27:01.399841   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399847   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399874   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0425 19:27:01.399885   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0425 19:27:01.399890   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399894   43102 command_runner.go:130] >       "size": "85932953",
	I0425 19:27:01.399901   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.399909   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399916   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399920   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399926   43102 command_runner.go:130] >     },
	I0425 19:27:01.399930   43102 command_runner.go:130] >     {
	I0425 19:27:01.399938   43102 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0425 19:27:01.399944   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399949   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0425 19:27:01.399952   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399958   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399965   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0425 19:27:01.399988   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0425 19:27:01.399997   43102 command_runner.go:130] >       ],
	I0425 19:27:01.400001   43102 command_runner.go:130] >       "size": "63026502",
	I0425 19:27:01.400007   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.400012   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.400018   43102 command_runner.go:130] >       },
	I0425 19:27:01.400022   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.400029   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.400033   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.400036   43102 command_runner.go:130] >     },
	I0425 19:27:01.400040   43102 command_runner.go:130] >     {
	I0425 19:27:01.400047   43102 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0425 19:27:01.400053   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.400057   43102 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0425 19:27:01.400063   43102 command_runner.go:130] >       ],
	I0425 19:27:01.400068   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.400076   43102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0425 19:27:01.400087   43102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0425 19:27:01.400093   43102 command_runner.go:130] >       ],
	I0425 19:27:01.400097   43102 command_runner.go:130] >       "size": "750414",
	I0425 19:27:01.400103   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.400107   43102 command_runner.go:130] >         "value": "65535"
	I0425 19:27:01.400113   43102 command_runner.go:130] >       },
	I0425 19:27:01.400117   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.400121   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.400130   43102 command_runner.go:130] >       "pinned": true
	I0425 19:27:01.400136   43102 command_runner.go:130] >     }
	I0425 19:27:01.400140   43102 command_runner.go:130] >   ]
	I0425 19:27:01.400145   43102 command_runner.go:130] > }
	I0425 19:27:01.400255   43102 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:27:01.400265   43102 cache_images.go:84] Images are preloaded, skipping loading
	I0425 19:27:01.400273   43102 kubeadm.go:928] updating node { 192.168.39.194 8443 v1.30.0 crio true true} ...
	I0425 19:27:01.400378   43102 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-857482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-857482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
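
The kubelet drop-in printed above is parameterised by the Kubernetes version, node name and node IP. A minimal text/template sketch of rendering it (not minikube's actual template; the data values are taken from this log):

// A sketch (not minikube's actual template) of rendering the kubelet
// drop-in above from the node's Kubernetes version, hostname and IP.
package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.0",
		"NodeName":          "multinode-857482",
		"NodeIP":            "192.168.39.194",
	})
}
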
	I0425 19:27:01.400444   43102 ssh_runner.go:195] Run: crio config
	I0425 19:27:01.444538   43102 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0425 19:27:01.444571   43102 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0425 19:27:01.444582   43102 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0425 19:27:01.444587   43102 command_runner.go:130] > #
	I0425 19:27:01.444598   43102 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0425 19:27:01.444607   43102 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0425 19:27:01.444618   43102 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0425 19:27:01.444630   43102 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0425 19:27:01.444636   43102 command_runner.go:130] > # reload'.
	I0425 19:27:01.444650   43102 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0425 19:27:01.444661   43102 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0425 19:27:01.444674   43102 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0425 19:27:01.444685   43102 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0425 19:27:01.444694   43102 command_runner.go:130] > [crio]
	I0425 19:27:01.444705   43102 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0425 19:27:01.444715   43102 command_runner.go:130] > # containers images, in this directory.
	I0425 19:27:01.444722   43102 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0425 19:27:01.444753   43102 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0425 19:27:01.444764   43102 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0425 19:27:01.444775   43102 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0425 19:27:01.444785   43102 command_runner.go:130] > # imagestore = ""
	I0425 19:27:01.444795   43102 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0425 19:27:01.444807   43102 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0425 19:27:01.444817   43102 command_runner.go:130] > storage_driver = "overlay"
	I0425 19:27:01.444830   43102 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0425 19:27:01.444840   43102 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0425 19:27:01.444847   43102 command_runner.go:130] > storage_option = [
	I0425 19:27:01.444858   43102 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0425 19:27:01.444866   43102 command_runner.go:130] > ]
	I0425 19:27:01.444875   43102 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0425 19:27:01.444887   43102 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0425 19:27:01.444897   43102 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0425 19:27:01.444905   43102 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0425 19:27:01.444918   43102 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0425 19:27:01.444928   43102 command_runner.go:130] > # always happen on a node reboot
	I0425 19:27:01.444937   43102 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0425 19:27:01.444960   43102 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0425 19:27:01.444973   43102 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0425 19:27:01.444983   43102 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0425 19:27:01.444991   43102 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0425 19:27:01.445005   43102 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0425 19:27:01.445017   43102 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0425 19:27:01.445027   43102 command_runner.go:130] > # internal_wipe = true
	I0425 19:27:01.445039   43102 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0425 19:27:01.445050   43102 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0425 19:27:01.445056   43102 command_runner.go:130] > # internal_repair = false
	I0425 19:27:01.445068   43102 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0425 19:27:01.445080   43102 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0425 19:27:01.445092   43102 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0425 19:27:01.445103   43102 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0425 19:27:01.445114   43102 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0425 19:27:01.445123   43102 command_runner.go:130] > [crio.api]
	I0425 19:27:01.445132   43102 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0425 19:27:01.445146   43102 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0425 19:27:01.445163   43102 command_runner.go:130] > # IP address on which the stream server will listen.
	I0425 19:27:01.445174   43102 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0425 19:27:01.445186   43102 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0425 19:27:01.445198   43102 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0425 19:27:01.445207   43102 command_runner.go:130] > # stream_port = "0"
	I0425 19:27:01.445218   43102 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0425 19:27:01.445228   43102 command_runner.go:130] > # stream_enable_tls = false
	I0425 19:27:01.445237   43102 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0425 19:27:01.445246   43102 command_runner.go:130] > # stream_idle_timeout = ""
	I0425 19:27:01.445256   43102 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0425 19:27:01.445268   43102 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0425 19:27:01.445275   43102 command_runner.go:130] > # minutes.
	I0425 19:27:01.445283   43102 command_runner.go:130] > # stream_tls_cert = ""
	I0425 19:27:01.445292   43102 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0425 19:27:01.445306   43102 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0425 19:27:01.445325   43102 command_runner.go:130] > # stream_tls_key = ""
	I0425 19:27:01.445338   43102 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0425 19:27:01.445353   43102 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0425 19:27:01.445370   43102 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0425 19:27:01.445380   43102 command_runner.go:130] > # stream_tls_ca = ""
	I0425 19:27:01.445391   43102 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0425 19:27:01.445401   43102 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0425 19:27:01.445412   43102 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0425 19:27:01.445422   43102 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0425 19:27:01.445431   43102 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0425 19:27:01.445442   43102 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0425 19:27:01.445451   43102 command_runner.go:130] > [crio.runtime]
	I0425 19:27:01.445461   43102 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0425 19:27:01.445472   43102 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0425 19:27:01.445481   43102 command_runner.go:130] > # "nofile=1024:2048"
	I0425 19:27:01.445491   43102 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0425 19:27:01.445500   43102 command_runner.go:130] > # default_ulimits = [
	I0425 19:27:01.445505   43102 command_runner.go:130] > # ]
	I0425 19:27:01.445515   43102 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0425 19:27:01.445524   43102 command_runner.go:130] > # no_pivot = false
	I0425 19:27:01.445533   43102 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0425 19:27:01.445547   43102 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0425 19:27:01.445559   43102 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0425 19:27:01.445578   43102 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0425 19:27:01.445589   43102 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0425 19:27:01.445603   43102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0425 19:27:01.445611   43102 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0425 19:27:01.445622   43102 command_runner.go:130] > # Cgroup setting for conmon
	I0425 19:27:01.445637   43102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0425 19:27:01.445649   43102 command_runner.go:130] > conmon_cgroup = "pod"
	I0425 19:27:01.445663   43102 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0425 19:27:01.445675   43102 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0425 19:27:01.445689   43102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0425 19:27:01.445699   43102 command_runner.go:130] > conmon_env = [
	I0425 19:27:01.445708   43102 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0425 19:27:01.445717   43102 command_runner.go:130] > ]
	I0425 19:27:01.445725   43102 command_runner.go:130] > # Additional environment variables to set for all the
	I0425 19:27:01.445736   43102 command_runner.go:130] > # containers. These are overridden if set in the
	I0425 19:27:01.445750   43102 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0425 19:27:01.445759   43102 command_runner.go:130] > # default_env = [
	I0425 19:27:01.445764   43102 command_runner.go:130] > # ]
	I0425 19:27:01.445777   43102 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0425 19:27:01.445792   43102 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0425 19:27:01.445801   43102 command_runner.go:130] > # selinux = false
	I0425 19:27:01.445812   43102 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0425 19:27:01.445826   43102 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0425 19:27:01.445839   43102 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0425 19:27:01.445849   43102 command_runner.go:130] > # seccomp_profile = ""
	I0425 19:27:01.445859   43102 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0425 19:27:01.445873   43102 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0425 19:27:01.445885   43102 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0425 19:27:01.445893   43102 command_runner.go:130] > # which might increase security.
	I0425 19:27:01.445902   43102 command_runner.go:130] > # This option is currently deprecated,
	I0425 19:27:01.445912   43102 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0425 19:27:01.445923   43102 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0425 19:27:01.445936   43102 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0425 19:27:01.445947   43102 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0425 19:27:01.445961   43102 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0425 19:27:01.445974   43102 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0425 19:27:01.445984   43102 command_runner.go:130] > # This option supports live configuration reload.
	I0425 19:27:01.445996   43102 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0425 19:27:01.446007   43102 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0425 19:27:01.446017   43102 command_runner.go:130] > # the cgroup blockio controller.
	I0425 19:27:01.446025   43102 command_runner.go:130] > # blockio_config_file = ""
	I0425 19:27:01.446039   43102 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0425 19:27:01.446049   43102 command_runner.go:130] > # blockio parameters.
	I0425 19:27:01.446057   43102 command_runner.go:130] > # blockio_reload = false
	I0425 19:27:01.446071   43102 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0425 19:27:01.446081   43102 command_runner.go:130] > # irqbalance daemon.
	I0425 19:27:01.446090   43102 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0425 19:27:01.446103   43102 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0425 19:27:01.446117   43102 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0425 19:27:01.446131   43102 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0425 19:27:01.446143   43102 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0425 19:27:01.446158   43102 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0425 19:27:01.446170   43102 command_runner.go:130] > # This option supports live configuration reload.
	I0425 19:27:01.446176   43102 command_runner.go:130] > # rdt_config_file = ""
	I0425 19:27:01.446187   43102 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0425 19:27:01.446198   43102 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0425 19:27:01.446231   43102 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0425 19:27:01.446242   43102 command_runner.go:130] > # separate_pull_cgroup = ""
	I0425 19:27:01.446252   43102 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0425 19:27:01.446264   43102 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0425 19:27:01.446273   43102 command_runner.go:130] > # will be added.
	I0425 19:27:01.446281   43102 command_runner.go:130] > # default_capabilities = [
	I0425 19:27:01.446289   43102 command_runner.go:130] > # 	"CHOWN",
	I0425 19:27:01.446295   43102 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0425 19:27:01.446304   43102 command_runner.go:130] > # 	"FSETID",
	I0425 19:27:01.446308   43102 command_runner.go:130] > # 	"FOWNER",
	I0425 19:27:01.446319   43102 command_runner.go:130] > # 	"SETGID",
	I0425 19:27:01.446328   43102 command_runner.go:130] > # 	"SETUID",
	I0425 19:27:01.446333   43102 command_runner.go:130] > # 	"SETPCAP",
	I0425 19:27:01.446342   43102 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0425 19:27:01.446347   43102 command_runner.go:130] > # 	"KILL",
	I0425 19:27:01.446355   43102 command_runner.go:130] > # ]
	I0425 19:27:01.446366   43102 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0425 19:27:01.446380   43102 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0425 19:27:01.446390   43102 command_runner.go:130] > # add_inheritable_capabilities = false
	I0425 19:27:01.446404   43102 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0425 19:27:01.446419   43102 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0425 19:27:01.446425   43102 command_runner.go:130] > default_sysctls = [
	I0425 19:27:01.446440   43102 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0425 19:27:01.446448   43102 command_runner.go:130] > ]
	I0425 19:27:01.446456   43102 command_runner.go:130] > # List of devices on the host that a
	I0425 19:27:01.446471   43102 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0425 19:27:01.446481   43102 command_runner.go:130] > # allowed_devices = [
	I0425 19:27:01.446489   43102 command_runner.go:130] > # 	"/dev/fuse",
	I0425 19:27:01.446498   43102 command_runner.go:130] > # ]
	I0425 19:27:01.446506   43102 command_runner.go:130] > # List of additional devices. specified as
	I0425 19:27:01.446520   43102 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0425 19:27:01.446536   43102 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0425 19:27:01.446549   43102 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0425 19:27:01.446559   43102 command_runner.go:130] > # additional_devices = [
	I0425 19:27:01.446564   43102 command_runner.go:130] > # ]
	I0425 19:27:01.446574   43102 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0425 19:27:01.446584   43102 command_runner.go:130] > # cdi_spec_dirs = [
	I0425 19:27:01.446589   43102 command_runner.go:130] > # 	"/etc/cdi",
	I0425 19:27:01.446599   43102 command_runner.go:130] > # 	"/var/run/cdi",
	I0425 19:27:01.446604   43102 command_runner.go:130] > # ]
	I0425 19:27:01.446615   43102 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0425 19:27:01.446628   43102 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0425 19:27:01.446639   43102 command_runner.go:130] > # Defaults to false.
	I0425 19:27:01.446649   43102 command_runner.go:130] > # device_ownership_from_security_context = false
	I0425 19:27:01.446662   43102 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0425 19:27:01.446675   43102 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0425 19:27:01.446684   43102 command_runner.go:130] > # hooks_dir = [
	I0425 19:27:01.446691   43102 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0425 19:27:01.446699   43102 command_runner.go:130] > # ]
	I0425 19:27:01.446708   43102 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0425 19:27:01.446721   43102 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0425 19:27:01.446732   43102 command_runner.go:130] > # its default mounts from the following two files:
	I0425 19:27:01.446741   43102 command_runner.go:130] > #
	I0425 19:27:01.446750   43102 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0425 19:27:01.446764   43102 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0425 19:27:01.446775   43102 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0425 19:27:01.446783   43102 command_runner.go:130] > #
	I0425 19:27:01.446792   43102 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0425 19:27:01.446805   43102 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0425 19:27:01.446818   43102 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0425 19:27:01.446828   43102 command_runner.go:130] > #      only add mounts it finds in this file.
	I0425 19:27:01.446834   43102 command_runner.go:130] > #
	I0425 19:27:01.446841   43102 command_runner.go:130] > # default_mounts_file = ""
	I0425 19:27:01.446852   43102 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0425 19:27:01.446870   43102 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0425 19:27:01.446879   43102 command_runner.go:130] > pids_limit = 1024
	I0425 19:27:01.446889   43102 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0425 19:27:01.446903   43102 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0425 19:27:01.446916   43102 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0425 19:27:01.446931   43102 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0425 19:27:01.446941   43102 command_runner.go:130] > # log_size_max = -1
	I0425 19:27:01.446953   43102 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0425 19:27:01.446964   43102 command_runner.go:130] > # log_to_journald = false
	I0425 19:27:01.446977   43102 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0425 19:27:01.446985   43102 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0425 19:27:01.446997   43102 command_runner.go:130] > # Path to directory for container attach sockets.
	I0425 19:27:01.447005   43102 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0425 19:27:01.447018   43102 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0425 19:27:01.447028   43102 command_runner.go:130] > # bind_mount_prefix = ""
	I0425 19:27:01.447037   43102 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0425 19:27:01.447047   43102 command_runner.go:130] > # read_only = false
	I0425 19:27:01.447056   43102 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0425 19:27:01.447069   43102 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0425 19:27:01.447078   43102 command_runner.go:130] > # live configuration reload.
	I0425 19:27:01.447087   43102 command_runner.go:130] > # log_level = "info"
	I0425 19:27:01.447097   43102 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0425 19:27:01.447108   43102 command_runner.go:130] > # This option supports live configuration reload.
	I0425 19:27:01.447116   43102 command_runner.go:130] > # log_filter = ""
	I0425 19:27:01.447128   43102 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0425 19:27:01.447140   43102 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0425 19:27:01.447150   43102 command_runner.go:130] > # separated by comma.
	I0425 19:27:01.447162   43102 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0425 19:27:01.447171   43102 command_runner.go:130] > # uid_mappings = ""
	I0425 19:27:01.447181   43102 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0425 19:27:01.447196   43102 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0425 19:27:01.447205   43102 command_runner.go:130] > # separated by comma.
	I0425 19:27:01.447216   43102 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0425 19:27:01.447226   43102 command_runner.go:130] > # gid_mappings = ""
	I0425 19:27:01.447236   43102 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0425 19:27:01.447249   43102 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0425 19:27:01.447265   43102 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0425 19:27:01.447280   43102 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0425 19:27:01.447290   43102 command_runner.go:130] > # minimum_mappable_uid = -1
	I0425 19:27:01.447301   43102 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0425 19:27:01.447319   43102 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0425 19:27:01.447331   43102 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0425 19:27:01.447345   43102 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0425 19:27:01.447351   43102 command_runner.go:130] > # minimum_mappable_gid = -1
	I0425 19:27:01.447364   43102 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0425 19:27:01.447376   43102 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0425 19:27:01.447388   43102 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0425 19:27:01.447398   43102 command_runner.go:130] > # ctr_stop_timeout = 30
	I0425 19:27:01.447406   43102 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0425 19:27:01.447420   43102 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0425 19:27:01.447431   43102 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0425 19:27:01.447441   43102 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0425 19:27:01.447452   43102 command_runner.go:130] > drop_infra_ctr = false
	I0425 19:27:01.447465   43102 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0425 19:27:01.447479   43102 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0425 19:27:01.447496   43102 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0425 19:27:01.447506   43102 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0425 19:27:01.447516   43102 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0425 19:27:01.447530   43102 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0425 19:27:01.447542   43102 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0425 19:27:01.447554   43102 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0425 19:27:01.447560   43102 command_runner.go:130] > # shared_cpuset = ""
	I0425 19:27:01.447575   43102 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0425 19:27:01.447586   43102 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0425 19:27:01.447596   43102 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0425 19:27:01.447606   43102 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0425 19:27:01.447616   43102 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0425 19:27:01.447626   43102 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0425 19:27:01.447638   43102 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0425 19:27:01.447648   43102 command_runner.go:130] > # enable_criu_support = false
	I0425 19:27:01.447656   43102 command_runner.go:130] > # Enable/disable the generation of the container,
	I0425 19:27:01.447673   43102 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0425 19:27:01.447682   43102 command_runner.go:130] > # enable_pod_events = false
	I0425 19:27:01.447693   43102 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0425 19:27:01.447706   43102 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0425 19:27:01.447719   43102 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0425 19:27:01.447728   43102 command_runner.go:130] > # default_runtime = "runc"
	I0425 19:27:01.447736   43102 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0425 19:27:01.447750   43102 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0425 19:27:01.447763   43102 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0425 19:27:01.447774   43102 command_runner.go:130] > # creation as a file is not desired either.
	I0425 19:27:01.447786   43102 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0425 19:27:01.447797   43102 command_runner.go:130] > # the hostname is being managed dynamically.
	I0425 19:27:01.447806   43102 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0425 19:27:01.447815   43102 command_runner.go:130] > # ]
	I0425 19:27:01.447826   43102 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0425 19:27:01.447840   43102 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0425 19:27:01.447851   43102 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0425 19:27:01.447863   43102 command_runner.go:130] > # Each entry in the table should follow the format:
	I0425 19:27:01.447870   43102 command_runner.go:130] > #
	I0425 19:27:01.447877   43102 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0425 19:27:01.447885   43102 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0425 19:27:01.447907   43102 command_runner.go:130] > # runtime_type = "oci"
	I0425 19:27:01.447916   43102 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0425 19:27:01.447924   43102 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0425 19:27:01.447933   43102 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0425 19:27:01.447940   43102 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0425 19:27:01.447949   43102 command_runner.go:130] > # monitor_env = []
	I0425 19:27:01.447956   43102 command_runner.go:130] > # privileged_without_host_devices = false
	I0425 19:27:01.447967   43102 command_runner.go:130] > # allowed_annotations = []
	I0425 19:27:01.447979   43102 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0425 19:27:01.447988   43102 command_runner.go:130] > # Where:
	I0425 19:27:01.447996   43102 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0425 19:27:01.448009   43102 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0425 19:27:01.448018   43102 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0425 19:27:01.448031   43102 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0425 19:27:01.448039   43102 command_runner.go:130] > #   in $PATH.
	I0425 19:27:01.448049   43102 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0425 19:27:01.448059   43102 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0425 19:27:01.448077   43102 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0425 19:27:01.448085   43102 command_runner.go:130] > #   state.
	I0425 19:27:01.448097   43102 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0425 19:27:01.448110   43102 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0425 19:27:01.448121   43102 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0425 19:27:01.448132   43102 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0425 19:27:01.448146   43102 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0425 19:27:01.448160   43102 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0425 19:27:01.448170   43102 command_runner.go:130] > #   The currently recognized values are:
	I0425 19:27:01.448182   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0425 19:27:01.448197   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0425 19:27:01.448208   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0425 19:27:01.448219   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0425 19:27:01.448233   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0425 19:27:01.448245   43102 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0425 19:27:01.448258   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0425 19:27:01.448270   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0425 19:27:01.448282   43102 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0425 19:27:01.448294   43102 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0425 19:27:01.448301   43102 command_runner.go:130] > #   deprecated option "conmon".
	I0425 19:27:01.448321   43102 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0425 19:27:01.448333   43102 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0425 19:27:01.448348   43102 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0425 19:27:01.448359   43102 command_runner.go:130] > #   should be moved to the container's cgroup
	I0425 19:27:01.448372   43102 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0425 19:27:01.448386   43102 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0425 19:27:01.448401   43102 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0425 19:27:01.448415   43102 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0425 19:27:01.448423   43102 command_runner.go:130] > #
	I0425 19:27:01.448429   43102 command_runner.go:130] > # Using the seccomp notifier feature:
	I0425 19:27:01.448437   43102 command_runner.go:130] > #
	I0425 19:27:01.448446   43102 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0425 19:27:01.448459   43102 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0425 19:27:01.448467   43102 command_runner.go:130] > #
	I0425 19:27:01.448480   43102 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0425 19:27:01.448491   43102 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0425 19:27:01.448501   43102 command_runner.go:130] > #
	I0425 19:27:01.448512   43102 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0425 19:27:01.448523   43102 command_runner.go:130] > # feature.
	I0425 19:27:01.448532   43102 command_runner.go:130] > #
	I0425 19:27:01.448541   43102 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0425 19:27:01.448554   43102 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0425 19:27:01.448566   43102 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0425 19:27:01.448579   43102 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0425 19:27:01.448590   43102 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0425 19:27:01.448597   43102 command_runner.go:130] > #
	I0425 19:27:01.448606   43102 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0425 19:27:01.448619   43102 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0425 19:27:01.448627   43102 command_runner.go:130] > #
	I0425 19:27:01.448637   43102 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0425 19:27:01.448647   43102 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0425 19:27:01.448653   43102 command_runner.go:130] > #
	I0425 19:27:01.448658   43102 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0425 19:27:01.448667   43102 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0425 19:27:01.448671   43102 command_runner.go:130] > # limitation.
	I0425 19:27:01.448677   43102 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0425 19:27:01.448682   43102 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0425 19:27:01.448686   43102 command_runner.go:130] > runtime_type = "oci"
	I0425 19:27:01.448690   43102 command_runner.go:130] > runtime_root = "/run/runc"
	I0425 19:27:01.448696   43102 command_runner.go:130] > runtime_config_path = ""
	I0425 19:27:01.448700   43102 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0425 19:27:01.448706   43102 command_runner.go:130] > monitor_cgroup = "pod"
	I0425 19:27:01.448710   43102 command_runner.go:130] > monitor_exec_cgroup = ""
	I0425 19:27:01.448714   43102 command_runner.go:130] > monitor_env = [
	I0425 19:27:01.448726   43102 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0425 19:27:01.448733   43102 command_runner.go:130] > ]
	I0425 19:27:01.448744   43102 command_runner.go:130] > privileged_without_host_devices = false
	I0425 19:27:01.448757   43102 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0425 19:27:01.448768   43102 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0425 19:27:01.448778   43102 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0425 19:27:01.448793   43102 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0425 19:27:01.448808   43102 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0425 19:27:01.448820   43102 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0425 19:27:01.448845   43102 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0425 19:27:01.448861   43102 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0425 19:27:01.448876   43102 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0425 19:27:01.448890   43102 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0425 19:27:01.448899   43102 command_runner.go:130] > # Example:
	I0425 19:27:01.448907   43102 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0425 19:27:01.448917   43102 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0425 19:27:01.448923   43102 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0425 19:27:01.448929   43102 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0425 19:27:01.448932   43102 command_runner.go:130] > # cpuset = 0
	I0425 19:27:01.448937   43102 command_runner.go:130] > # cpushares = "0-1"
	I0425 19:27:01.448941   43102 command_runner.go:130] > # Where:
	I0425 19:27:01.448945   43102 command_runner.go:130] > # The workload name is workload-type.
	I0425 19:27:01.448954   43102 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0425 19:27:01.448961   43102 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0425 19:27:01.448966   43102 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0425 19:27:01.448976   43102 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0425 19:27:01.448982   43102 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0425 19:27:01.448989   43102 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0425 19:27:01.448995   43102 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0425 19:27:01.449002   43102 command_runner.go:130] > # Default value is set to true
	I0425 19:27:01.449007   43102 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0425 19:27:01.449014   43102 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0425 19:27:01.449019   43102 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0425 19:27:01.449023   43102 command_runner.go:130] > # Default value is set to 'false'
	I0425 19:27:01.449029   43102 command_runner.go:130] > # disable_hostport_mapping = false
	I0425 19:27:01.449036   43102 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0425 19:27:01.449042   43102 command_runner.go:130] > #
	I0425 19:27:01.449047   43102 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0425 19:27:01.449055   43102 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0425 19:27:01.449061   43102 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0425 19:27:01.449067   43102 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0425 19:27:01.449072   43102 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0425 19:27:01.449075   43102 command_runner.go:130] > [crio.image]
	I0425 19:27:01.449081   43102 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0425 19:27:01.449085   43102 command_runner.go:130] > # default_transport = "docker://"
	I0425 19:27:01.449093   43102 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0425 19:27:01.449099   43102 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0425 19:27:01.449103   43102 command_runner.go:130] > # global_auth_file = ""
	I0425 19:27:01.449108   43102 command_runner.go:130] > # The image used to instantiate infra containers.
	I0425 19:27:01.449112   43102 command_runner.go:130] > # This option supports live configuration reload.
	I0425 19:27:01.449117   43102 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0425 19:27:01.449123   43102 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0425 19:27:01.449128   43102 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0425 19:27:01.449133   43102 command_runner.go:130] > # This option supports live configuration reload.
	I0425 19:27:01.449137   43102 command_runner.go:130] > # pause_image_auth_file = ""
	I0425 19:27:01.449142   43102 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0425 19:27:01.449150   43102 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0425 19:27:01.449156   43102 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0425 19:27:01.449163   43102 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0425 19:27:01.449167   43102 command_runner.go:130] > # pause_command = "/pause"
	I0425 19:27:01.449176   43102 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0425 19:27:01.449182   43102 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0425 19:27:01.449187   43102 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0425 19:27:01.449193   43102 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0425 19:27:01.449201   43102 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0425 19:27:01.449207   43102 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0425 19:27:01.449213   43102 command_runner.go:130] > # pinned_images = [
	I0425 19:27:01.449217   43102 command_runner.go:130] > # ]
	I0425 19:27:01.449223   43102 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0425 19:27:01.449233   43102 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0425 19:27:01.449240   43102 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0425 19:27:01.449247   43102 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0425 19:27:01.449252   43102 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0425 19:27:01.449258   43102 command_runner.go:130] > # signature_policy = ""
	I0425 19:27:01.449264   43102 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0425 19:27:01.449274   43102 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0425 19:27:01.449279   43102 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0425 19:27:01.449288   43102 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0425 19:27:01.449293   43102 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0425 19:27:01.449297   43102 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0425 19:27:01.449305   43102 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0425 19:27:01.449318   43102 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0425 19:27:01.449323   43102 command_runner.go:130] > # changing them here.
	I0425 19:27:01.449327   43102 command_runner.go:130] > # insecure_registries = [
	I0425 19:27:01.449330   43102 command_runner.go:130] > # ]
	I0425 19:27:01.449336   43102 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0425 19:27:01.449342   43102 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0425 19:27:01.449346   43102 command_runner.go:130] > # image_volumes = "mkdir"
	I0425 19:27:01.449350   43102 command_runner.go:130] > # Temporary directory to use for storing big files
	I0425 19:27:01.449355   43102 command_runner.go:130] > # big_files_temporary_dir = ""
	I0425 19:27:01.449362   43102 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0425 19:27:01.449366   43102 command_runner.go:130] > # CNI plugins.
	I0425 19:27:01.449370   43102 command_runner.go:130] > [crio.network]
	I0425 19:27:01.449376   43102 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0425 19:27:01.449381   43102 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0425 19:27:01.449385   43102 command_runner.go:130] > # cni_default_network = ""
	I0425 19:27:01.449391   43102 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0425 19:27:01.449397   43102 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0425 19:27:01.449403   43102 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0425 19:27:01.449407   43102 command_runner.go:130] > # plugin_dirs = [
	I0425 19:27:01.449413   43102 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0425 19:27:01.449416   43102 command_runner.go:130] > # ]
	I0425 19:27:01.449422   43102 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0425 19:27:01.449427   43102 command_runner.go:130] > [crio.metrics]
	I0425 19:27:01.449432   43102 command_runner.go:130] > # Globally enable or disable metrics support.
	I0425 19:27:01.449435   43102 command_runner.go:130] > enable_metrics = true
	I0425 19:27:01.449440   43102 command_runner.go:130] > # Specify enabled metrics collectors.
	I0425 19:27:01.449447   43102 command_runner.go:130] > # Per default all metrics are enabled.
	I0425 19:27:01.449452   43102 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0425 19:27:01.449460   43102 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0425 19:27:01.449465   43102 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0425 19:27:01.449471   43102 command_runner.go:130] > # metrics_collectors = [
	I0425 19:27:01.449475   43102 command_runner.go:130] > # 	"operations",
	I0425 19:27:01.449480   43102 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0425 19:27:01.449487   43102 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0425 19:27:01.449491   43102 command_runner.go:130] > # 	"operations_errors",
	I0425 19:27:01.449495   43102 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0425 19:27:01.449499   43102 command_runner.go:130] > # 	"image_pulls_by_name",
	I0425 19:27:01.449504   43102 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0425 19:27:01.449508   43102 command_runner.go:130] > # 	"image_pulls_failures",
	I0425 19:27:01.449512   43102 command_runner.go:130] > # 	"image_pulls_successes",
	I0425 19:27:01.449519   43102 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0425 19:27:01.449522   43102 command_runner.go:130] > # 	"image_layer_reuse",
	I0425 19:27:01.449527   43102 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0425 19:27:01.449536   43102 command_runner.go:130] > # 	"containers_oom_total",
	I0425 19:27:01.449540   43102 command_runner.go:130] > # 	"containers_oom",
	I0425 19:27:01.449546   43102 command_runner.go:130] > # 	"processes_defunct",
	I0425 19:27:01.449550   43102 command_runner.go:130] > # 	"operations_total",
	I0425 19:27:01.449554   43102 command_runner.go:130] > # 	"operations_latency_seconds",
	I0425 19:27:01.449560   43102 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0425 19:27:01.449564   43102 command_runner.go:130] > # 	"operations_errors_total",
	I0425 19:27:01.449570   43102 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0425 19:27:01.449574   43102 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0425 19:27:01.449579   43102 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0425 19:27:01.449583   43102 command_runner.go:130] > # 	"image_pulls_success_total",
	I0425 19:27:01.449587   43102 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0425 19:27:01.449594   43102 command_runner.go:130] > # 	"containers_oom_count_total",
	I0425 19:27:01.449599   43102 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0425 19:27:01.449605   43102 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0425 19:27:01.449608   43102 command_runner.go:130] > # ]
	I0425 19:27:01.449615   43102 command_runner.go:130] > # The port on which the metrics server will listen.
	I0425 19:27:01.449618   43102 command_runner.go:130] > # metrics_port = 9090
	I0425 19:27:01.449623   43102 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0425 19:27:01.449629   43102 command_runner.go:130] > # metrics_socket = ""
	I0425 19:27:01.449634   43102 command_runner.go:130] > # The certificate for the secure metrics server.
	I0425 19:27:01.449642   43102 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0425 19:27:01.449650   43102 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0425 19:27:01.449657   43102 command_runner.go:130] > # certificate on any modification event.
	I0425 19:27:01.449660   43102 command_runner.go:130] > # metrics_cert = ""
	I0425 19:27:01.449671   43102 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0425 19:27:01.449677   43102 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0425 19:27:01.449681   43102 command_runner.go:130] > # metrics_key = ""
	I0425 19:27:01.449687   43102 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0425 19:27:01.449691   43102 command_runner.go:130] > [crio.tracing]
	I0425 19:27:01.449697   43102 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0425 19:27:01.449703   43102 command_runner.go:130] > # enable_tracing = false
	I0425 19:27:01.449708   43102 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0425 19:27:01.449713   43102 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0425 19:27:01.449722   43102 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0425 19:27:01.449731   43102 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0425 19:27:01.449738   43102 command_runner.go:130] > # CRI-O NRI configuration.
	I0425 19:27:01.449746   43102 command_runner.go:130] > [crio.nri]
	I0425 19:27:01.449753   43102 command_runner.go:130] > # Globally enable or disable NRI.
	I0425 19:27:01.449761   43102 command_runner.go:130] > # enable_nri = false
	I0425 19:27:01.449768   43102 command_runner.go:130] > # NRI socket to listen on.
	I0425 19:27:01.449778   43102 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0425 19:27:01.449785   43102 command_runner.go:130] > # NRI plugin directory to use.
	I0425 19:27:01.449795   43102 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0425 19:27:01.449811   43102 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0425 19:27:01.449821   43102 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0425 19:27:01.449829   43102 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0425 19:27:01.449834   43102 command_runner.go:130] > # nri_disable_connections = false
	I0425 19:27:01.449839   43102 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0425 19:27:01.449846   43102 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0425 19:27:01.449852   43102 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0425 19:27:01.449858   43102 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0425 19:27:01.449864   43102 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0425 19:27:01.449870   43102 command_runner.go:130] > [crio.stats]
	I0425 19:27:01.449876   43102 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0425 19:27:01.449883   43102 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0425 19:27:01.449887   43102 command_runner.go:130] > # stats_collection_period = 0
	I0425 19:27:01.450485   43102 command_runner.go:130] ! time="2024-04-25 19:27:01.415007525Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0425 19:27:01.450507   43102 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
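
The dump above is the CRI-O configuration in effect on the node. One sanity check worth making when reading it is that its cgroup_manager matches the cgroupDriver in the kubelet configuration rendered further below (both are cgroupfs here). A minimal Go sketch of that comparison follows; it is a hypothetical helper, not minikube code, and the input strings are just excerpts of the two dumps:

// cgroup_consistency.go - sketch: compare CRI-O's cgroup_manager with the
// kubelet's cgroupDriver, using excerpts of the config dumps shown above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical excerpts standing in for the full config dumps.
	crioConf := `cgroup_manager = "cgroupfs"`
	kubeletConf := `cgroupDriver: cgroupfs`

	crioRe := regexp.MustCompile(`(?m)^cgroup_manager\s*=\s*"(\w+)"`)
	kubeletRe := regexp.MustCompile(`(?m)^cgroupDriver:\s*(\w+)`)

	crioMgr := crioRe.FindStringSubmatch(crioConf)
	kubeletDrv := kubeletRe.FindStringSubmatch(kubeletConf)

	if crioMgr == nil || kubeletDrv == nil || crioMgr[1] != kubeletDrv[1] {
		fmt.Println("cgroup driver mismatch between CRI-O and kubelet")
		return
	}
	fmt.Printf("cgroup driver consistent: %s\n", crioMgr[1])
}
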
	I0425 19:27:01.450721   43102 cni.go:84] Creating CNI manager for ""
	I0425 19:27:01.450740   43102 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0425 19:27:01.450750   43102 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 19:27:01.450777   43102 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-857482 NodeName:multinode-857482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 19:27:01.450932   43102 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-857482"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
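
The kubeadm configuration above is the document that gets copied to /var/tmp/minikube/kubeadm.yaml.new in the following step. As a hedged aside, a config like this can also be checked offline with `kubeadm config validate` (available in recent kubeadm releases). The Go sketch below is hypothetical, not part of minikube, and uses a trimmed-down stand-in for the full config:

// validate_kubeadm_config.go - sketch: write a kubeadm config to a temp file
// and shell out to `kubeadm config validate` (assumes kubeadm is on PATH).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// A trimmed-down stand-in for the ClusterConfiguration shown above.
	cfg := []byte(`apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0
`)

	f, err := os.CreateTemp("", "kubeadm-*.yaml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.Write(cfg); err != nil {
		panic(err)
	}
	f.Close()

	out, err := exec.Command("kubeadm", "config", "validate", "--config", f.Name()).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("validation failed:", err)
	}
}
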
	I0425 19:27:01.450997   43102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 19:27:01.462219   43102 command_runner.go:130] > kubeadm
	I0425 19:27:01.462246   43102 command_runner.go:130] > kubectl
	I0425 19:27:01.462252   43102 command_runner.go:130] > kubelet
	I0425 19:27:01.462277   43102 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 19:27:01.462329   43102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 19:27:01.472827   43102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0425 19:27:01.492542   43102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 19:27:01.511855   43102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0425 19:27:01.531332   43102 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0425 19:27:01.535938   43102 command_runner.go:130] > 192.168.39.194	control-plane.minikube.internal
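
The grep above confirms that /etc/hosts already maps control-plane.minikube.internal to the node IP. A small Go sketch of the same check (illustrative only, not minikube's implementation):

// hosts_check.go - sketch: verify /etc/hosts maps the control-plane hostname
// to the expected node IP, mirroring the grep performed in the log above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const ip, host = "192.168.39.194", "control-plane.minikube.internal"

	f, err := os.Open("/etc/hosts")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == ip && fields[1] == host {
			fmt.Println("control-plane host entry present")
			return
		}
	}
	fmt.Println("control-plane host entry missing")
}
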
	I0425 19:27:01.536002   43102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:27:01.684746   43102 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 19:27:01.701274   43102 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482 for IP: 192.168.39.194
	I0425 19:27:01.701301   43102 certs.go:194] generating shared ca certs ...
	I0425 19:27:01.701328   43102 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:27:01.701508   43102 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 19:27:01.701551   43102 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 19:27:01.701561   43102 certs.go:256] generating profile certs ...
	I0425 19:27:01.701630   43102 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/client.key
	I0425 19:27:01.701687   43102 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/apiserver.key.8dbc5944
	I0425 19:27:01.701719   43102 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/proxy-client.key
	I0425 19:27:01.701729   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 19:27:01.701767   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 19:27:01.701787   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 19:27:01.701808   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 19:27:01.701828   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 19:27:01.701846   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 19:27:01.701866   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 19:27:01.701879   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 19:27:01.701929   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 19:27:01.701964   43102 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 19:27:01.701974   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 19:27:01.701997   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 19:27:01.702019   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 19:27:01.702039   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 19:27:01.702074   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:27:01.702098   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 19:27:01.702111   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 19:27:01.702123   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:27:01.702668   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 19:27:01.729793   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 19:27:01.755044   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 19:27:01.782051   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 19:27:01.808248   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 19:27:01.833204   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 19:27:01.858215   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 19:27:01.883732   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 19:27:01.908798   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 19:27:01.934362   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 19:27:01.961204   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 19:27:01.988221   43102 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 19:27:02.006712   43102 ssh_runner.go:195] Run: openssl version
	I0425 19:27:02.013131   43102 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0425 19:27:02.013201   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 19:27:02.026190   43102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 19:27:02.031165   43102 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 19:27:02.031229   43102 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 19:27:02.031268   43102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 19:27:02.037172   43102 command_runner.go:130] > 51391683
	I0425 19:27:02.037227   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 19:27:02.047791   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 19:27:02.059885   43102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 19:27:02.064580   43102 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:27:02.064633   43102 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:27:02.064674   43102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 19:27:02.070670   43102 command_runner.go:130] > 3ec20f2e
	I0425 19:27:02.070718   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 19:27:02.081337   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 19:27:02.093634   43102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:27:02.098346   43102 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:27:02.098440   43102 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:27:02.098487   43102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:27:02.104530   43102 command_runner.go:130] > b5213941
	I0425 19:27:02.104589   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
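
The `openssl x509 -hash` / `ln -fs` pairs above follow the standard OpenSSL CA-directory convention: each trusted certificate gets a symlink named <subject-hash>.0 under /etc/ssl/certs. A minimal Go sketch of that step (a hypothetical helper, not minikube's code):

// hash_link.go - sketch of the <subject-hash>.0 symlink convention used above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	// `openssl x509 -hash -noout` prints the subject-name hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // mirror `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
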
	I0425 19:27:02.115119   43102 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 19:27:02.120054   43102 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 19:27:02.120076   43102 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0425 19:27:02.120093   43102 command_runner.go:130] > Device: 253,1	Inode: 7339542     Links: 1
	I0425 19:27:02.120104   43102 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0425 19:27:02.120116   43102 command_runner.go:130] > Access: 2024-04-25 19:20:31.246400190 +0000
	I0425 19:27:02.120121   43102 command_runner.go:130] > Modify: 2024-04-25 19:20:31.246400190 +0000
	I0425 19:27:02.120127   43102 command_runner.go:130] > Change: 2024-04-25 19:20:31.246400190 +0000
	I0425 19:27:02.120134   43102 command_runner.go:130] >  Birth: 2024-04-25 19:20:31.246400190 +0000
	I0425 19:27:02.120179   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 19:27:02.126261   43102 command_runner.go:130] > Certificate will not expire
	I0425 19:27:02.126314   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 19:27:02.132054   43102 command_runner.go:130] > Certificate will not expire
	I0425 19:27:02.132221   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 19:27:02.137928   43102 command_runner.go:130] > Certificate will not expire
	I0425 19:27:02.138252   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 19:27:02.144161   43102 command_runner.go:130] > Certificate will not expire
	I0425 19:27:02.144192   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 19:27:02.149926   43102 command_runner.go:130] > Certificate will not expire
	I0425 19:27:02.149966   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 19:27:02.155689   43102 command_runner.go:130] > Certificate will not expire
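	(Aside, not part of the log: each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours; "Certificate will not expire" means the control-plane certs are still valid well past the restart. A minimal sketch of the same check with crypto/x509 follows; `expiresWithin` is a hypothetical helper, not minikube's implementation.)

	// Sketch: the Go equivalent of `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// before now+window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		if expiring {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}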
	I0425 19:27:02.155774   43102 kubeadm.go:391] StartCluster: {Name:multinode-857482 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-857482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:27:02.155893   43102 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 19:27:02.155937   43102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 19:27:02.200225   43102 command_runner.go:130] > 90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1
	I0425 19:27:02.200250   43102 command_runner.go:130] > 45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4
	I0425 19:27:02.200255   43102 command_runner.go:130] > e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653
	I0425 19:27:02.200262   43102 command_runner.go:130] > ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53
	I0425 19:27:02.200267   43102 command_runner.go:130] > a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd
	I0425 19:27:02.200272   43102 command_runner.go:130] > 50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a
	I0425 19:27:02.200277   43102 command_runner.go:130] > 843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e
	I0425 19:27:02.200284   43102 command_runner.go:130] > 374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95
	I0425 19:27:02.200306   43102 cri.go:89] found id: "90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1"
	I0425 19:27:02.200321   43102 cri.go:89] found id: "45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4"
	I0425 19:27:02.200326   43102 cri.go:89] found id: "e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653"
	I0425 19:27:02.200331   43102 cri.go:89] found id: "ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53"
	I0425 19:27:02.200335   43102 cri.go:89] found id: "a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd"
	I0425 19:27:02.200342   43102 cri.go:89] found id: "50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a"
	I0425 19:27:02.200346   43102 cri.go:89] found id: "843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e"
	I0425 19:27:02.200354   43102 cri.go:89] found id: "374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95"
	I0425 19:27:02.200358   43102 cri.go:89] found id: ""
	I0425 19:27:02.200395   43102 ssh_runner.go:195] Run: sudo runc list -f json
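	(Aside, not part of the log: the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call prints one container ID per line, and the "found id:" entries above are simply those non-empty lines collected into a list. The sketch below shows that shape; `listKubeSystemContainers` is an illustrative helper under the assumption that `crictl` is installed on the node, not minikube's actual API.)

	// Sketch: collect kube-system container IDs the way the log shows.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers shells out to crictl with the same flags as
	// the log and returns the container IDs it prints, blanks dropped.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}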
	
	
	==> CRI-O <==
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.188873712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714073308188847638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0cc4b26-d7ac-4ff5-93ed-c7ff403d0efa name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.189788308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=294f0555-a44e-4a1b-a3b0-aeaedf12cf14 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.189870429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=294f0555-a44e-4a1b-a3b0-aeaedf12cf14 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.190254968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36b5ce47353cc0c96dc8b5e9a33afd7fb38b5fbeabb96502d852b56825a6cb3d,PodSandboxId:dd50b400fdc3b7ad73cd4c60d7cd079e8fd50dea262c8891ed8fd5c3f1024876,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714073263074706219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cf38fb5a61ccd685210f49586b1838e03e6a8f24e5dd7f90b212b82b98e2f7,PodSandboxId:e6803b765d61e1ba7f49b06b2427aff8568e5224b90549e5b0cb6cadf7a8db44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714073229612261901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d831b8602d86212ca0ef86f76b2a589993ec24594da3f7e115dd7e98fe29fb7,PodSandboxId:48c24d8fe93ade998923e73bcfadaa746d58973624039eb5a4f47fb4c33dbcab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714073229471756466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e36a3005313fbfb55de3fd1442a04ed10cb79094f1a654508afe8d0485ba41,PodSandboxId:0801a5c3e1bad99484d6fc95a29ce72432691a446dbfa2192853f084911be965,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714073229429973720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},An
notations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0f0bec0bfe8c519abee73f140e20eaa46e228490f690075a8a9ea1d0832b2e,PodSandboxId:92d31c8b03469b443faa23fa66606c74548c2f21fc9bf70f79d1a9cb7048c9bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714073229337173054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30-4abec6ea5602,},Annotations:map[string]string{io.ku
bernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a74a0d18b4fffdefa91ae8f49e5820ac816df60ff4fa7be1478ec7b6c96adac,PodSandboxId:83ea89064e01f551e77ddd34e4a5bfa50de0b78bf65e908b300f50ad1ee8f212,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714073224543468174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd00ac37f2b0620a23ff4777f7b8c8f3bd22582d2b0680cc199db51c8d2a2ebe,PodSandboxId:558bff1fcd05bf116d3f65453c9b929ce7b50127cdc80b192983ce8c83f3f9d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714073224505108779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io.kub
ernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fc7eb8c6052e494d92460d7cc331abd9063f24fc7baece9b50aa5349942750,PodSandboxId:9b32c84c32b51e23ce062771f8a8029b513f58e6bb4f05d192ae7c1b198888a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714073224499323025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1ec686ad9a7a298de9bc3ed357f9487b2134662a8b2377fc4b4c63d25701f2,PodSandboxId:814a5b64618a0f32875cce5e29b1a6c717454aa0f3051bcdffb2f27aa2f64d42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714073224389365563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.container.hash: c16a365d,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa56fafcb469dc3396bc085681aa0b6058fdca2f63fda74d7ce625ee56d7b228,PodSandboxId:26e22cc5d185c36ad51b61d52b3a92341a5345bb64ca7086bd1e36f3ca3a65a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714072906141442748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1,PodSandboxId:42269422724f8c49b8316f09efb7256089d87cf294a3d91a8fec646997201ec0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714072857917492767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},Annotations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4,PodSandboxId:c02b4009f3b7819c289e15a9b9634da93780d6f19507bcaa63086c422bf3d779,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714072856649372910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653,PodSandboxId:b7b0f9cfcb0ca0e91e6f3fdddcc5294f2c8623b3fdf7e8f469bc6527ac80ba1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714072854916856256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53,PodSandboxId:88c72015e2cbcb088b9378b418e2b4a8271565f15dbd9426bfcb9c699aed8474,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714072854684172654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30
-4abec6ea5602,},Annotations:map[string]string{io.kubernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd,PodSandboxId:5c59a402a7f40ed8e0574c71e5b2687615ed5b1f218712a1a6e052fa14cc6169,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714072834905751142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a,PodSandboxId:22aa80497e4ffc633cd6fc08d1710f481bcb900a5cac34f13ccce495c06874c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714072834870358706,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.
container.hash: c16a365d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e,PodSandboxId:6c5aba426835e831b909ab93f25b0a25b037ff565dc6fe62ea410d1cee46c1ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714072834823453769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95,PodSandboxId:51b56702f2b0bc5e3e0d647c3647512e86f57eb259b19391739deb2056df9d20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714072834771018602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=294f0555-a44e-4a1b-a3b0-aeaedf12cf14 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.241417699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ff8e50a-fd26-4c17-b368-59daba18d74b name=/runtime.v1.RuntimeService/Version
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.241516816Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ff8e50a-fd26-4c17-b368-59daba18d74b name=/runtime.v1.RuntimeService/Version
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.243284796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83102fc1-abf4-4138-b21f-31beacb61d3f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.243786602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714073308243761747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83102fc1-abf4-4138-b21f-31beacb61d3f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.244247157Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=72b8b529-58f8-4519-af69-2a01e9e3592d name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.244336753Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=72b8b529-58f8-4519-af69-2a01e9e3592d name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.244838326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36b5ce47353cc0c96dc8b5e9a33afd7fb38b5fbeabb96502d852b56825a6cb3d,PodSandboxId:dd50b400fdc3b7ad73cd4c60d7cd079e8fd50dea262c8891ed8fd5c3f1024876,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714073263074706219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cf38fb5a61ccd685210f49586b1838e03e6a8f24e5dd7f90b212b82b98e2f7,PodSandboxId:e6803b765d61e1ba7f49b06b2427aff8568e5224b90549e5b0cb6cadf7a8db44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714073229612261901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d831b8602d86212ca0ef86f76b2a589993ec24594da3f7e115dd7e98fe29fb7,PodSandboxId:48c24d8fe93ade998923e73bcfadaa746d58973624039eb5a4f47fb4c33dbcab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714073229471756466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e36a3005313fbfb55de3fd1442a04ed10cb79094f1a654508afe8d0485ba41,PodSandboxId:0801a5c3e1bad99484d6fc95a29ce72432691a446dbfa2192853f084911be965,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714073229429973720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},An
notations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0f0bec0bfe8c519abee73f140e20eaa46e228490f690075a8a9ea1d0832b2e,PodSandboxId:92d31c8b03469b443faa23fa66606c74548c2f21fc9bf70f79d1a9cb7048c9bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714073229337173054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30-4abec6ea5602,},Annotations:map[string]string{io.ku
bernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a74a0d18b4fffdefa91ae8f49e5820ac816df60ff4fa7be1478ec7b6c96adac,PodSandboxId:83ea89064e01f551e77ddd34e4a5bfa50de0b78bf65e908b300f50ad1ee8f212,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714073224543468174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd00ac37f2b0620a23ff4777f7b8c8f3bd22582d2b0680cc199db51c8d2a2ebe,PodSandboxId:558bff1fcd05bf116d3f65453c9b929ce7b50127cdc80b192983ce8c83f3f9d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714073224505108779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io.kub
ernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fc7eb8c6052e494d92460d7cc331abd9063f24fc7baece9b50aa5349942750,PodSandboxId:9b32c84c32b51e23ce062771f8a8029b513f58e6bb4f05d192ae7c1b198888a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714073224499323025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1ec686ad9a7a298de9bc3ed357f9487b2134662a8b2377fc4b4c63d25701f2,PodSandboxId:814a5b64618a0f32875cce5e29b1a6c717454aa0f3051bcdffb2f27aa2f64d42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714073224389365563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.container.hash: c16a365d,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa56fafcb469dc3396bc085681aa0b6058fdca2f63fda74d7ce625ee56d7b228,PodSandboxId:26e22cc5d185c36ad51b61d52b3a92341a5345bb64ca7086bd1e36f3ca3a65a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714072906141442748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1,PodSandboxId:42269422724f8c49b8316f09efb7256089d87cf294a3d91a8fec646997201ec0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714072857917492767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},Annotations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4,PodSandboxId:c02b4009f3b7819c289e15a9b9634da93780d6f19507bcaa63086c422bf3d779,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714072856649372910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653,PodSandboxId:b7b0f9cfcb0ca0e91e6f3fdddcc5294f2c8623b3fdf7e8f469bc6527ac80ba1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714072854916856256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53,PodSandboxId:88c72015e2cbcb088b9378b418e2b4a8271565f15dbd9426bfcb9c699aed8474,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714072854684172654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30
-4abec6ea5602,},Annotations:map[string]string{io.kubernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd,PodSandboxId:5c59a402a7f40ed8e0574c71e5b2687615ed5b1f218712a1a6e052fa14cc6169,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714072834905751142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a,PodSandboxId:22aa80497e4ffc633cd6fc08d1710f481bcb900a5cac34f13ccce495c06874c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714072834870358706,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.
container.hash: c16a365d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e,PodSandboxId:6c5aba426835e831b909ab93f25b0a25b037ff565dc6fe62ea410d1cee46c1ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714072834823453769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95,PodSandboxId:51b56702f2b0bc5e3e0d647c3647512e86f57eb259b19391739deb2056df9d20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714072834771018602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=72b8b529-58f8-4519-af69-2a01e9e3592d name=/runtime.v1.RuntimeService/ListContainers
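	(Aside, not part of the log: the Version/ImageFsInfo/ListContainers request-response pairs above are ordinary CRI gRPC calls hitting CRI-O's unix socket. The sketch below issues the same ListContainers query directly; it assumes the k8s.io/cri-api module and CRI-O's default socket path /var/run/crio/crio.sock, and is not code from the test itself.)

	// Sketch: query CRI-O over its CRI gRPC endpoint.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O listens on a local unix socket; no TLS is used on it.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter mirrors the "No filters were applied" log lines:
		// the runtime returns every container it knows about.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}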
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.292120219Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82421067-e7fe-4681-adba-74d90374240e name=/runtime.v1.RuntimeService/Version
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.292219718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82421067-e7fe-4681-adba-74d90374240e name=/runtime.v1.RuntimeService/Version
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.293716822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47adf4bd-eb50-45e7-872c-3d3c59b97d00 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.294112462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714073308294089035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47adf4bd-eb50-45e7-872c-3d3c59b97d00 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.294762670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9c24827-4600-46af-a545-63859056775e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.294845649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9c24827-4600-46af-a545-63859056775e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.295181400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36b5ce47353cc0c96dc8b5e9a33afd7fb38b5fbeabb96502d852b56825a6cb3d,PodSandboxId:dd50b400fdc3b7ad73cd4c60d7cd079e8fd50dea262c8891ed8fd5c3f1024876,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714073263074706219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cf38fb5a61ccd685210f49586b1838e03e6a8f24e5dd7f90b212b82b98e2f7,PodSandboxId:e6803b765d61e1ba7f49b06b2427aff8568e5224b90549e5b0cb6cadf7a8db44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714073229612261901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d831b8602d86212ca0ef86f76b2a589993ec24594da3f7e115dd7e98fe29fb7,PodSandboxId:48c24d8fe93ade998923e73bcfadaa746d58973624039eb5a4f47fb4c33dbcab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714073229471756466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e36a3005313fbfb55de3fd1442a04ed10cb79094f1a654508afe8d0485ba41,PodSandboxId:0801a5c3e1bad99484d6fc95a29ce72432691a446dbfa2192853f084911be965,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714073229429973720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},An
notations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0f0bec0bfe8c519abee73f140e20eaa46e228490f690075a8a9ea1d0832b2e,PodSandboxId:92d31c8b03469b443faa23fa66606c74548c2f21fc9bf70f79d1a9cb7048c9bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714073229337173054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30-4abec6ea5602,},Annotations:map[string]string{io.ku
bernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a74a0d18b4fffdefa91ae8f49e5820ac816df60ff4fa7be1478ec7b6c96adac,PodSandboxId:83ea89064e01f551e77ddd34e4a5bfa50de0b78bf65e908b300f50ad1ee8f212,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714073224543468174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd00ac37f2b0620a23ff4777f7b8c8f3bd22582d2b0680cc199db51c8d2a2ebe,PodSandboxId:558bff1fcd05bf116d3f65453c9b929ce7b50127cdc80b192983ce8c83f3f9d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714073224505108779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io.kub
ernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fc7eb8c6052e494d92460d7cc331abd9063f24fc7baece9b50aa5349942750,PodSandboxId:9b32c84c32b51e23ce062771f8a8029b513f58e6bb4f05d192ae7c1b198888a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714073224499323025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1ec686ad9a7a298de9bc3ed357f9487b2134662a8b2377fc4b4c63d25701f2,PodSandboxId:814a5b64618a0f32875cce5e29b1a6c717454aa0f3051bcdffb2f27aa2f64d42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714073224389365563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.container.hash: c16a365d,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa56fafcb469dc3396bc085681aa0b6058fdca2f63fda74d7ce625ee56d7b228,PodSandboxId:26e22cc5d185c36ad51b61d52b3a92341a5345bb64ca7086bd1e36f3ca3a65a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714072906141442748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1,PodSandboxId:42269422724f8c49b8316f09efb7256089d87cf294a3d91a8fec646997201ec0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714072857917492767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},Annotations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4,PodSandboxId:c02b4009f3b7819c289e15a9b9634da93780d6f19507bcaa63086c422bf3d779,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714072856649372910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653,PodSandboxId:b7b0f9cfcb0ca0e91e6f3fdddcc5294f2c8623b3fdf7e8f469bc6527ac80ba1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714072854916856256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53,PodSandboxId:88c72015e2cbcb088b9378b418e2b4a8271565f15dbd9426bfcb9c699aed8474,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714072854684172654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30
-4abec6ea5602,},Annotations:map[string]string{io.kubernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd,PodSandboxId:5c59a402a7f40ed8e0574c71e5b2687615ed5b1f218712a1a6e052fa14cc6169,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714072834905751142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a,PodSandboxId:22aa80497e4ffc633cd6fc08d1710f481bcb900a5cac34f13ccce495c06874c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714072834870358706,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.
container.hash: c16a365d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e,PodSandboxId:6c5aba426835e831b909ab93f25b0a25b037ff565dc6fe62ea410d1cee46c1ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714072834823453769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95,PodSandboxId:51b56702f2b0bc5e3e0d647c3647512e86f57eb259b19391739deb2056df9d20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714072834771018602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9c24827-4600-46af-a545-63859056775e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.342556714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2d54f58-9c32-434a-93ba-a193d5584fb6 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.342704283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2d54f58-9c32-434a-93ba-a193d5584fb6 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.344779632Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d0d64de-8c32-4bb0-93e8-417604496bcc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.345181353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714073308345159911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d0d64de-8c32-4bb0-93e8-417604496bcc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.346099919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d779cf9a-d47f-4b7e-8528-a1699caeab46 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.346189022Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d779cf9a-d47f-4b7e-8528-a1699caeab46 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:28:28 multinode-857482 crio[2844]: time="2024-04-25 19:28:28.346758179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36b5ce47353cc0c96dc8b5e9a33afd7fb38b5fbeabb96502d852b56825a6cb3d,PodSandboxId:dd50b400fdc3b7ad73cd4c60d7cd079e8fd50dea262c8891ed8fd5c3f1024876,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714073263074706219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cf38fb5a61ccd685210f49586b1838e03e6a8f24e5dd7f90b212b82b98e2f7,PodSandboxId:e6803b765d61e1ba7f49b06b2427aff8568e5224b90549e5b0cb6cadf7a8db44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714073229612261901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d831b8602d86212ca0ef86f76b2a589993ec24594da3f7e115dd7e98fe29fb7,PodSandboxId:48c24d8fe93ade998923e73bcfadaa746d58973624039eb5a4f47fb4c33dbcab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714073229471756466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e36a3005313fbfb55de3fd1442a04ed10cb79094f1a654508afe8d0485ba41,PodSandboxId:0801a5c3e1bad99484d6fc95a29ce72432691a446dbfa2192853f084911be965,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714073229429973720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},An
notations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0f0bec0bfe8c519abee73f140e20eaa46e228490f690075a8a9ea1d0832b2e,PodSandboxId:92d31c8b03469b443faa23fa66606c74548c2f21fc9bf70f79d1a9cb7048c9bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714073229337173054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30-4abec6ea5602,},Annotations:map[string]string{io.ku
bernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a74a0d18b4fffdefa91ae8f49e5820ac816df60ff4fa7be1478ec7b6c96adac,PodSandboxId:83ea89064e01f551e77ddd34e4a5bfa50de0b78bf65e908b300f50ad1ee8f212,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714073224543468174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd00ac37f2b0620a23ff4777f7b8c8f3bd22582d2b0680cc199db51c8d2a2ebe,PodSandboxId:558bff1fcd05bf116d3f65453c9b929ce7b50127cdc80b192983ce8c83f3f9d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714073224505108779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io.kub
ernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fc7eb8c6052e494d92460d7cc331abd9063f24fc7baece9b50aa5349942750,PodSandboxId:9b32c84c32b51e23ce062771f8a8029b513f58e6bb4f05d192ae7c1b198888a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714073224499323025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1ec686ad9a7a298de9bc3ed357f9487b2134662a8b2377fc4b4c63d25701f2,PodSandboxId:814a5b64618a0f32875cce5e29b1a6c717454aa0f3051bcdffb2f27aa2f64d42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714073224389365563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.container.hash: c16a365d,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa56fafcb469dc3396bc085681aa0b6058fdca2f63fda74d7ce625ee56d7b228,PodSandboxId:26e22cc5d185c36ad51b61d52b3a92341a5345bb64ca7086bd1e36f3ca3a65a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714072906141442748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1,PodSandboxId:42269422724f8c49b8316f09efb7256089d87cf294a3d91a8fec646997201ec0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714072857917492767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},Annotations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4,PodSandboxId:c02b4009f3b7819c289e15a9b9634da93780d6f19507bcaa63086c422bf3d779,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714072856649372910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653,PodSandboxId:b7b0f9cfcb0ca0e91e6f3fdddcc5294f2c8623b3fdf7e8f469bc6527ac80ba1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714072854916856256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53,PodSandboxId:88c72015e2cbcb088b9378b418e2b4a8271565f15dbd9426bfcb9c699aed8474,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714072854684172654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30
-4abec6ea5602,},Annotations:map[string]string{io.kubernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd,PodSandboxId:5c59a402a7f40ed8e0574c71e5b2687615ed5b1f218712a1a6e052fa14cc6169,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714072834905751142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a,PodSandboxId:22aa80497e4ffc633cd6fc08d1710f481bcb900a5cac34f13ccce495c06874c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714072834870358706,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.
container.hash: c16a365d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e,PodSandboxId:6c5aba426835e831b909ab93f25b0a25b037ff565dc6fe62ea410d1cee46c1ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714072834823453769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95,PodSandboxId:51b56702f2b0bc5e3e0d647c3647512e86f57eb259b19391739deb2056df9d20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714072834771018602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d779cf9a-d47f-4b7e-8528-a1699caeab46 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	36b5ce47353cc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      45 seconds ago       Running             busybox                   1                   dd50b400fdc3b       busybox-fc5497c4f-5nvcd
	57cf38fb5a61c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   e6803b765d61e       kindnet-cslck
	0d831b8602d86       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   48c24d8fe93ad       coredns-7db6d8ff4d-jpgn9
	53e36a3005313       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   0801a5c3e1bad       storage-provisioner
	7e0f0bec0bfe8       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   92d31c8b03469       kube-proxy-r749w
	8a74a0d18b4ff       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   83ea89064e01f       kube-scheduler-multinode-857482
	dd00ac37f2b06       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   1                   558bff1fcd05b       kube-controller-manager-multinode-857482
	b4fc7eb8c6052       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            1                   9b32c84c32b51       kube-apiserver-multinode-857482
	6b1ec686ad9a7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   814a5b64618a0       etcd-multinode-857482
	aa56fafcb469d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   26e22cc5d185c       busybox-fc5497c4f-5nvcd
	90f63f2641dae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   42269422724f8       storage-provisioner
	45abb60926ed3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   c02b4009f3b78       coredns-7db6d8ff4d-jpgn9
	e5e85ab7416e7       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   b7b0f9cfcb0ca       kindnet-cslck
	ef8755e344e04       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago        Exited              kube-proxy                0                   88c72015e2cbc       kube-proxy-r749w
	a2e02984ebc2f       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago        Exited              kube-scheduler            0                   5c59a402a7f40       kube-scheduler-multinode-857482
	50d52d4bddff3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   22aa80497e4ff       etcd-multinode-857482
	843f769af6424       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago        Exited              kube-controller-manager   0                   6c5aba426835e       kube-controller-manager-multinode-857482
	374c5041b0427       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago        Exited              kube-apiserver            0                   51b56702f2b0b       kube-apiserver-multinode-857482
	
	
	==> coredns [0d831b8602d86212ca0ef86f76b2a589993ec24594da3f7e115dd7e98fe29fb7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47317 - 48826 "HINFO IN 8766005205731033561.6680086199325924933. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021028555s
	
	
	==> coredns [45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4] <==
	[INFO] 10.244.1.2:41791 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002394988s
	[INFO] 10.244.1.2:44861 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125514s
	[INFO] 10.244.1.2:38916 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100885s
	[INFO] 10.244.1.2:59067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001196938s
	[INFO] 10.244.1.2:44526 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245468s
	[INFO] 10.244.1.2:41212 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078702s
	[INFO] 10.244.1.2:48858 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077356s
	[INFO] 10.244.0.3:41411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118264s
	[INFO] 10.244.0.3:58901 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079488s
	[INFO] 10.244.0.3:39547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074073s
	[INFO] 10.244.0.3:33466 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151126s
	[INFO] 10.244.1.2:39324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201953s
	[INFO] 10.244.1.2:48448 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115632s
	[INFO] 10.244.1.2:42885 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108855s
	[INFO] 10.244.1.2:55393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101583s
	[INFO] 10.244.0.3:49668 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125129s
	[INFO] 10.244.0.3:58718 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150779s
	[INFO] 10.244.0.3:59358 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138018s
	[INFO] 10.244.0.3:55736 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094508s
	[INFO] 10.244.1.2:37993 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000253074s
	[INFO] 10.244.1.2:34336 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000144026s
	[INFO] 10.244.1.2:57786 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138683s
	[INFO] 10.244.1.2:55015 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00015061s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-857482
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-857482
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=multinode-857482
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T19_20_41_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:20:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-857482
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:28:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:27:07 +0000   Thu, 25 Apr 2024 19:20:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:27:07 +0000   Thu, 25 Apr 2024 19:20:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:27:07 +0000   Thu, 25 Apr 2024 19:20:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:27:07 +0000   Thu, 25 Apr 2024 19:20:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    multinode-857482
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0def079f20434cd5bbdfc4247f5577c0
	  System UUID:                0def079f-2043-4cd5-bbdf-c4247f5577c0
	  Boot ID:                    833f3010-465e-47f2-b2dd-9ef743d0be86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5nvcd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 coredns-7db6d8ff4d-jpgn9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m34s
	  kube-system                 etcd-multinode-857482                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m48s
	  kube-system                 kindnet-cslck                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m35s
	  kube-system                 kube-apiserver-multinode-857482             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 kube-controller-manager-multinode-857482    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 kube-proxy-r749w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-scheduler-multinode-857482             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m33s              kube-proxy       
	  Normal  Starting                 78s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m48s              kubelet          Node multinode-857482 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m48s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m48s              kubelet          Node multinode-857482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m48s              kubelet          Node multinode-857482 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m48s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m35s              node-controller  Node multinode-857482 event: Registered Node multinode-857482 in Controller
	  Normal  NodeReady                7m32s              kubelet          Node multinode-857482 status is now: NodeReady
	  Normal  Starting                 85s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  85s (x8 over 85s)  kubelet          Node multinode-857482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x8 over 85s)  kubelet          Node multinode-857482 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x7 over 85s)  kubelet          Node multinode-857482 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  85s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                node-controller  Node multinode-857482 event: Registered Node multinode-857482 in Controller
	
	
	Name:               multinode-857482-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-857482-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=multinode-857482
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T19_27_45_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:27:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-857482-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:28:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:28:16 +0000   Thu, 25 Apr 2024 19:27:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:28:16 +0000   Thu, 25 Apr 2024 19:27:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:28:16 +0000   Thu, 25 Apr 2024 19:27:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:28:16 +0000   Thu, 25 Apr 2024 19:27:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    multinode-857482-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 add9791c83e34e71b7a9b00dc5ab31c1
	  System UUID:                add9791c-83e3-4e71-b7a9-b00dc5ab31c1
	  Boot ID:                    f79d32d3-488c-483e-8368-611ad5060b99
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j5v9r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kindnet-hqr9m              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m58s
	  kube-system                 kube-proxy-b9xv5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m53s                  kube-proxy  
	  Normal  Starting                 39s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m58s (x2 over 6m58s)  kubelet     Node multinode-857482-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m58s (x2 over 6m58s)  kubelet     Node multinode-857482-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m58s (x2 over 6m58s)  kubelet     Node multinode-857482-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m48s                  kubelet     Node multinode-857482-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  43s (x2 over 44s)      kubelet     Node multinode-857482-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x2 over 44s)      kubelet     Node multinode-857482-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x2 over 44s)      kubelet     Node multinode-857482-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                34s                    kubelet     Node multinode-857482-m02 status is now: NodeReady
	
	
	Name:               multinode-857482-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-857482-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=multinode-857482
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T19_28_16_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:28:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-857482-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:28:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:28:25 +0000   Thu, 25 Apr 2024 19:28:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:28:25 +0000   Thu, 25 Apr 2024 19:28:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:28:25 +0000   Thu, 25 Apr 2024 19:28:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:28:25 +0000   Thu, 25 Apr 2024 19:28:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    multinode-857482-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d47396bbc464458cb70f65649530a438
	  System UUID:                d47396bb-c464-458c-b70f-65649530a438
	  Boot ID:                    780f26aa-6722-457e-a434-9432d84747d3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-z7chs       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-proxy-w9c48    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m4s                   kube-proxy  
	  Normal  Starting                 8s                     kube-proxy  
	  Normal  Starting                 5m21s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m9s (x2 over 6m10s)   kubelet     Node multinode-857482-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x2 over 6m10s)   kubelet     Node multinode-857482-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x2 over 6m10s)   kubelet     Node multinode-857482-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m9s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m59s                  kubelet     Node multinode-857482-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m25s (x2 over 5m25s)  kubelet     Node multinode-857482-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m25s (x2 over 5m25s)  kubelet     Node multinode-857482-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m25s (x2 over 5m25s)  kubelet     Node multinode-857482-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m16s                  kubelet     Node multinode-857482-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet     Node multinode-857482-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet     Node multinode-857482-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet     Node multinode-857482-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-857482-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.069962] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.196750] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.136392] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.274846] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.761080] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.059696] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.903774] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +1.103525] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.473757] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.089108] kauditd_printk_skb: 25 callbacks suppressed
	[ +14.241934] systemd-fstab-generator[1507]: Ignoring "noauto" option for root device
	[  +0.028551] kauditd_printk_skb: 21 callbacks suppressed
	[Apr25 19:21] kauditd_printk_skb: 84 callbacks suppressed
	[Apr25 19:26] systemd-fstab-generator[2762]: Ignoring "noauto" option for root device
	[  +0.167255] systemd-fstab-generator[2774]: Ignoring "noauto" option for root device
	[  +0.189315] systemd-fstab-generator[2788]: Ignoring "noauto" option for root device
	[  +0.160235] systemd-fstab-generator[2800]: Ignoring "noauto" option for root device
	[  +0.310138] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[Apr25 19:27] systemd-fstab-generator[2928]: Ignoring "noauto" option for root device
	[  +0.084454] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.783396] systemd-fstab-generator[3054]: Ignoring "noauto" option for root device
	[  +5.788981] kauditd_printk_skb: 74 callbacks suppressed
	[  +9.953591] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
	[  +0.112702] kauditd_printk_skb: 32 callbacks suppressed
	[ +23.697165] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a] <==
	{"level":"info","ts":"2024-04-25T19:20:36.062927Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.194:2379"}
	{"level":"info","ts":"2024-04-25T19:20:36.069689Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T19:20:36.073783Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T19:20:36.113745Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:20:36.114109Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:20:36.114239Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-04-25T19:21:30.776938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.167009ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5517348346174703230 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-857482-m02.17c99c365cbc3dac\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-857482-m02.17c99c365cbc3dac\" value_size:642 lease:5517348346174702641 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-25T19:21:30.777296Z","caller":"traceutil/trace.go:171","msg":"trace[1124323218] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"253.232436ms","start":"2024-04-25T19:21:30.52405Z","end":"2024-04-25T19:21:30.777282Z","steps":["trace[1124323218] 'process raft request'  (duration: 69.090166ms)","trace[1124323218] 'compare'  (duration: 183.076233ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-25T19:21:30.777434Z","caller":"traceutil/trace.go:171","msg":"trace[1314848433] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"202.218678ms","start":"2024-04-25T19:21:30.575201Z","end":"2024-04-25T19:21:30.77742Z","steps":["trace[1314848433] 'process raft request'  (duration: 201.832445ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T19:22:19.211306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.410465ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5517348346174703623 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-857482-m03.17c99c41a4841b5a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-857482-m03.17c99c41a4841b5a\" value_size:640 lease:5517348346174703348 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-25T19:22:19.211689Z","caller":"traceutil/trace.go:171","msg":"trace[1989869227] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"230.250708ms","start":"2024-04-25T19:22:18.981339Z","end":"2024-04-25T19:22:19.21159Z","steps":["trace[1989869227] 'process raft request'  (duration: 64.395462ms)","trace[1989869227] 'compare'  (duration: 165.114085ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-25T19:22:19.211914Z","caller":"traceutil/trace.go:171","msg":"trace[1026513346] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"180.961245ms","start":"2024-04-25T19:22:19.030944Z","end":"2024-04-25T19:22:19.211905Z","steps":["trace[1026513346] 'process raft request'  (duration: 180.552677ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T19:22:23.35508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.022103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-25T19:22:23.355203Z","caller":"traceutil/trace.go:171","msg":"trace[2040449417] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:617; }","duration":"138.182198ms","start":"2024-04-25T19:22:23.217006Z","end":"2024-04-25T19:22:23.355188Z","steps":["trace[2040449417] 'count revisions from in-memory index tree'  (duration: 137.972707ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T19:22:23.355255Z","caller":"traceutil/trace.go:171","msg":"trace[911213496] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"119.396753ms","start":"2024-04-25T19:22:23.235846Z","end":"2024-04-25T19:22:23.355242Z","steps":["trace[911213496] 'process raft request'  (duration: 119.071977ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T19:25:19.352824Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-25T19:25:19.353001Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-857482","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"]}
	{"level":"warn","ts":"2024-04-25T19:25:19.353124Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.194:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-25T19:25:19.353158Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.194:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-25T19:25:19.353266Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-25T19:25:19.353324Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-25T19:25:19.424008Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b4bd7d4638784c91","current-leader-member-id":"b4bd7d4638784c91"}
	{"level":"info","ts":"2024-04-25T19:25:19.427098Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-25T19:25:19.4274Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-25T19:25:19.427453Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-857482","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"]}
	
	
	==> etcd [6b1ec686ad9a7a298de9bc3ed357f9487b2134662a8b2377fc4b4c63d25701f2] <==
	{"level":"info","ts":"2024-04-25T19:27:04.881363Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-25T19:27:04.886731Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-25T19:27:04.89298Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-25T19:27:04.893202Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b4bd7d4638784c91","initial-advertise-peer-urls":["https://192.168.39.194:2380"],"listen-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.194:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-25T19:27:04.893258Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-25T19:27:04.899758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 switched to configuration voters=(13023703437973933201)"}
	{"level":"info","ts":"2024-04-25T19:27:04.900122Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","added-peer-id":"b4bd7d4638784c91","added-peer-peer-urls":["https://192.168.39.194:2380"]}
	{"level":"info","ts":"2024-04-25T19:27:04.903277Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:27:04.903512Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:27:04.900748Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-25T19:27:04.905799Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-25T19:27:06.353356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-25T19:27:06.353431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-25T19:27:06.353484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgPreVoteResp from b4bd7d4638784c91 at term 2"}
	{"level":"info","ts":"2024-04-25T19:27:06.353499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became candidate at term 3"}
	{"level":"info","ts":"2024-04-25T19:27:06.353505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgVoteResp from b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2024-04-25T19:27:06.353512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became leader at term 3"}
	{"level":"info","ts":"2024-04-25T19:27:06.353523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b4bd7d4638784c91 elected leader b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2024-04-25T19:27:06.360181Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b4bd7d4638784c91","local-member-attributes":"{Name:multinode-857482 ClientURLs:[https://192.168.39.194:2379]}","request-path":"/0/members/b4bd7d4638784c91/attributes","cluster-id":"bb2ce3d66f8fb721","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T19:27:06.360397Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T19:27:06.36044Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T19:27:06.360389Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:27:06.360411Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:27:06.36259Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.194:2379"}
	{"level":"info","ts":"2024-04-25T19:27:06.363499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:28:28 up 8 min,  0 users,  load average: 0.06, 0.18, 0.13
	Linux multinode-857482 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [57cf38fb5a61ccd685210f49586b1838e03e6a8f24e5dd7f90b212b82b98e2f7] <==
	I0425 19:27:40.627423       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:27:50.632333       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:27:50.632380       1 main.go:227] handling current node
	I0425 19:27:50.632392       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:27:50.632398       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:27:50.632497       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:27:50.632502       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:28:00.644947       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:28:00.645002       1 main.go:227] handling current node
	I0425 19:28:00.645016       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:28:00.645025       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:28:00.645141       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:28:00.645184       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:28:10.673533       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:28:10.673830       1 main.go:227] handling current node
	I0425 19:28:10.673884       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:28:10.673906       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:28:10.674064       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:28:10.674084       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:28:20.681932       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:28:20.681994       1 main.go:227] handling current node
	I0425 19:28:20.682008       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:28:20.682014       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:28:20.682168       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:28:20.682208       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653] <==
	I0425 19:24:36.121503       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:24:46.135486       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:24:46.135531       1 main.go:227] handling current node
	I0425 19:24:46.135542       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:24:46.135548       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:24:46.135708       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:24:46.135715       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:24:56.140873       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:24:56.140934       1 main.go:227] handling current node
	I0425 19:24:56.140944       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:24:56.140950       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:24:56.141052       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:24:56.141087       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:25:06.153906       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:25:06.154009       1 main.go:227] handling current node
	I0425 19:25:06.154033       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:25:06.154052       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:25:06.154162       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:25:06.154181       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:25:16.160448       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:25:16.160553       1 main.go:227] handling current node
	I0425 19:25:16.160587       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:25:16.160612       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:25:16.160969       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:25:16.161058       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95] <==
	I0425 19:20:39.473601       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0425 19:20:39.516908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0425 19:20:39.667342       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0425 19:20:39.675103       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.194]
	I0425 19:20:39.676102       1 controller.go:615] quota admission added evaluator for: endpoints
	I0425 19:20:39.683808       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0425 19:20:39.877012       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0425 19:20:40.501454       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0425 19:20:40.534106       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0425 19:20:40.556902       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0425 19:20:53.914157       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0425 19:20:53.963255       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0425 19:21:47.639266       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38376: use of closed network connection
	E0425 19:21:47.865469       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38404: use of closed network connection
	E0425 19:21:48.060936       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38422: use of closed network connection
	E0425 19:21:48.238495       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38434: use of closed network connection
	E0425 19:21:48.424092       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38448: use of closed network connection
	E0425 19:21:48.717517       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38476: use of closed network connection
	E0425 19:21:48.901238       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38488: use of closed network connection
	E0425 19:21:49.090264       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38514: use of closed network connection
	E0425 19:21:49.274224       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38534: use of closed network connection
	I0425 19:25:19.349794       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0425 19:25:19.374509       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0425 19:25:19.374587       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0425 19:25:19.381906       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [b4fc7eb8c6052e494d92460d7cc331abd9063f24fc7baece9b50aa5349942750] <==
	I0425 19:27:07.779189       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0425 19:27:07.779337       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0425 19:27:07.779398       1 shared_informer.go:320] Caches are synced for configmaps
	I0425 19:27:07.779470       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0425 19:27:07.779611       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0425 19:27:07.791996       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0425 19:27:07.793601       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0425 19:27:07.798326       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0425 19:27:07.798369       1 policy_source.go:224] refreshing policies
	I0425 19:27:07.798435       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0425 19:27:07.803368       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0425 19:27:07.808986       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0425 19:27:07.810065       1 aggregator.go:165] initial CRD sync complete...
	I0425 19:27:07.810114       1 autoregister_controller.go:141] Starting autoregister controller
	I0425 19:27:07.810122       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0425 19:27:07.810127       1 cache.go:39] Caches are synced for autoregister controller
	E0425 19:27:07.828902       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0425 19:27:08.689934       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0425 19:27:10.357159       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0425 19:27:10.514033       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0425 19:27:10.525216       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0425 19:27:10.599596       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0425 19:27:10.606335       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0425 19:27:20.538443       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0425 19:27:20.675368       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e] <==
	I0425 19:21:30.781282       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-857482-m02\" does not exist"
	I0425 19:21:30.807797       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-857482-m02" podCIDRs=["10.244.1.0/24"]
	I0425 19:21:33.126105       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-857482-m02"
	I0425 19:21:40.670468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:21:43.066368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.633057ms"
	I0425 19:21:43.107314       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.878822ms"
	I0425 19:21:43.107750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="257.083µs"
	I0425 19:21:43.108469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.778µs"
	I0425 19:21:46.606806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.78182ms"
	I0425 19:21:46.606927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.918µs"
	I0425 19:21:46.848153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.967758ms"
	I0425 19:21:46.849545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="216.224µs"
	I0425 19:22:19.217149       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:22:19.217405       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-857482-m03\" does not exist"
	I0425 19:22:19.241720       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-857482-m03" podCIDRs=["10.244.2.0/24"]
	I0425 19:22:23.146959       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-857482-m03"
	I0425 19:22:29.510793       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:23:01.928482       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:23:03.139409       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-857482-m03\" does not exist"
	I0425 19:23:03.141972       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:23:03.157988       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-857482-m03" podCIDRs=["10.244.3.0/24"]
	I0425 19:23:12.606492       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:23:58.217169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:23:58.264701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.798909ms"
	I0425 19:23:58.264952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.063µs"
	
	
	==> kube-controller-manager [dd00ac37f2b0620a23ff4777f7b8c8f3bd22582d2b0680cc199db51c8d2a2ebe] <==
	I0425 19:27:21.099407       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0425 19:27:21.133104       1 shared_informer.go:320] Caches are synced for garbage collector
	I0425 19:27:40.803826       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.333438ms"
	I0425 19:27:40.815745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.643527ms"
	I0425 19:27:40.816068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.611µs"
	I0425 19:27:40.827977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.062µs"
	I0425 19:27:45.052971       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-857482-m02\" does not exist"
	I0425 19:27:45.064375       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-857482-m02" podCIDRs=["10.244.1.0/24"]
	I0425 19:27:46.957167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.396µs"
	I0425 19:27:46.988130       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.192µs"
	I0425 19:27:47.003159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.654µs"
	I0425 19:27:47.014015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.566µs"
	I0425 19:27:47.020067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.523µs"
	I0425 19:27:47.022869       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.879µs"
	I0425 19:27:51.627574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.481µs"
	I0425 19:27:54.704324       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:27:54.723098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.84µs"
	I0425 19:27:54.737897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.585µs"
	I0425 19:27:58.326003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.221672ms"
	I0425 19:27:58.326090       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.407µs"
	I0425 19:28:14.467559       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:28:15.568283       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:28:15.568420       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-857482-m03\" does not exist"
	I0425 19:28:15.590013       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-857482-m03" podCIDRs=["10.244.2.0/24"]
	I0425 19:28:25.178693       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	
	
	==> kube-proxy [7e0f0bec0bfe8c519abee73f140e20eaa46e228490f690075a8a9ea1d0832b2e] <==
	I0425 19:27:09.654861       1 server_linux.go:69] "Using iptables proxy"
	I0425 19:27:09.669714       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.194"]
	I0425 19:27:09.901216       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:27:09.901289       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:27:09.901309       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:27:09.930816       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:27:09.931036       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:27:09.931088       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:27:09.938177       1 config.go:192] "Starting service config controller"
	I0425 19:27:09.938214       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:27:09.938239       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:27:09.938243       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:27:09.938711       1 config.go:319] "Starting node config controller"
	I0425 19:27:09.938721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:27:10.039289       1 shared_informer.go:320] Caches are synced for node config
	I0425 19:27:10.039892       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:27:10.041735       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53] <==
	I0425 19:20:54.929221       1 server_linux.go:69] "Using iptables proxy"
	I0425 19:20:54.940773       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.194"]
	I0425 19:20:55.057935       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:20:55.058006       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:20:55.058025       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:20:55.066168       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:20:55.066428       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:20:55.066440       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:20:55.086570       1 config.go:192] "Starting service config controller"
	I0425 19:20:55.087090       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:20:55.087539       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:20:55.087573       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:20:55.099466       1 config.go:319] "Starting node config controller"
	I0425 19:20:55.099505       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:20:55.188718       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:20:55.188789       1 shared_informer.go:320] Caches are synced for service config
	I0425 19:20:55.199553       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8a74a0d18b4fffdefa91ae8f49e5820ac816df60ff4fa7be1478ec7b6c96adac] <==
	I0425 19:27:05.886610       1 serving.go:380] Generated self-signed cert in-memory
	W0425 19:27:07.725222       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0425 19:27:07.725343       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 19:27:07.725379       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0425 19:27:07.725461       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0425 19:27:07.814515       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0425 19:27:07.814571       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:27:07.819158       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0425 19:27:07.819390       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0425 19:27:07.819401       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0425 19:27:07.819414       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0425 19:27:07.920366       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd] <==
	E0425 19:20:38.707064       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 19:20:38.756311       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 19:20:38.756368       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 19:20:38.797533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:20:38.797589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0425 19:20:39.004314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0425 19:20:39.004375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0425 19:20:39.052087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 19:20:39.052883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0425 19:20:39.099845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 19:20:39.099899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0425 19:20:39.125196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 19:20:39.125257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 19:20:39.183358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 19:20:39.183418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 19:20:39.221806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 19:20:39.221943       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 19:20:39.233258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 19:20:39.233286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0425 19:20:39.239718       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 19:20:39.240834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 19:20:39.255453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0425 19:20:39.255797       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0425 19:20:41.070246       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0425 19:25:19.351292       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 25 19:27:04 multinode-857482 kubelet[3061]: E0425 19:27:04.865051    3061 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.194:8443: connect: connection refused
	Apr 25 19:27:05 multinode-857482 kubelet[3061]: I0425 19:27:05.237963    3061 kubelet_node_status.go:73] "Attempting to register node" node="multinode-857482"
	Apr 25 19:27:07 multinode-857482 kubelet[3061]: I0425 19:27:07.887608    3061 kubelet_node_status.go:112] "Node was previously registered" node="multinode-857482"
	Apr 25 19:27:07 multinode-857482 kubelet[3061]: I0425 19:27:07.888095    3061 kubelet_node_status.go:76] "Successfully registered node" node="multinode-857482"
	Apr 25 19:27:07 multinode-857482 kubelet[3061]: I0425 19:27:07.889836    3061 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 25 19:27:07 multinode-857482 kubelet[3061]: I0425 19:27:07.890959    3061 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: E0425 19:27:08.496233    3061 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-857482\" already exists" pod="kube-system/kube-apiserver-multinode-857482"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.702727    3061 apiserver.go:52] "Watching apiserver"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.707531    3061 topology_manager.go:215] "Topology Admit Handler" podUID="6dda0d17-6ae1-40ac-9ed3-a272478b00e9" podNamespace="kube-system" podName="kindnet-cslck"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.707767    3061 topology_manager.go:215] "Topology Admit Handler" podUID="88201317-c03e-4b73-9d30-4abec6ea5602" podNamespace="kube-system" podName="kube-proxy-r749w"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.707876    3061 topology_manager.go:215] "Topology Admit Handler" podUID="6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jpgn9"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.707926    3061 topology_manager.go:215] "Topology Admit Handler" podUID="3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a" podNamespace="kube-system" podName="storage-provisioner"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.707981    3061 topology_manager.go:215] "Topology Admit Handler" podUID="bfab8c51-36de-44d5-859a-efe4f72047e7" podNamespace="default" podName="busybox-fc5497c4f-5nvcd"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.727885    3061 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.813379    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88201317-c03e-4b73-9d30-4abec6ea5602-lib-modules\") pod \"kube-proxy-r749w\" (UID: \"88201317-c03e-4b73-9d30-4abec6ea5602\") " pod="kube-system/kube-proxy-r749w"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.813606    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88201317-c03e-4b73-9d30-4abec6ea5602-xtables-lock\") pod \"kube-proxy-r749w\" (UID: \"88201317-c03e-4b73-9d30-4abec6ea5602\") " pod="kube-system/kube-proxy-r749w"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.814523    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6dda0d17-6ae1-40ac-9ed3-a272478b00e9-cni-cfg\") pod \"kindnet-cslck\" (UID: \"6dda0d17-6ae1-40ac-9ed3-a272478b00e9\") " pod="kube-system/kindnet-cslck"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.814753    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dda0d17-6ae1-40ac-9ed3-a272478b00e9-xtables-lock\") pod \"kindnet-cslck\" (UID: \"6dda0d17-6ae1-40ac-9ed3-a272478b00e9\") " pod="kube-system/kindnet-cslck"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.814884    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a-tmp\") pod \"storage-provisioner\" (UID: \"3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a\") " pod="kube-system/storage-provisioner"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.815791    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dda0d17-6ae1-40ac-9ed3-a272478b00e9-lib-modules\") pod \"kindnet-cslck\" (UID: \"6dda0d17-6ae1-40ac-9ed3-a272478b00e9\") " pod="kube-system/kindnet-cslck"
	Apr 25 19:28:03 multinode-857482 kubelet[3061]: E0425 19:28:03.804538    3061 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:28:03 multinode-857482 kubelet[3061]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:28:03 multinode-857482 kubelet[3061]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:28:03 multinode-857482 kubelet[3061]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:28:03 multinode-857482 kubelet[3061]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:28:27.869950   44234 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18757-6355/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
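Note on the "bufio.Scanner: token too long" error in the stderr capture above: this is the standard failure mode of Go's bufio.Scanner when a single line in the input (here, lastStart.txt) exceeds the scanner's buffer limit (bufio.MaxScanTokenSize, 64 KiB by default). The sketch below is only an illustration of that mechanism and of the usual remedy (allocating a larger buffer via Scanner.Buffer); it is not minikube's actual logs.go code, and the file path used is a placeholder.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLines scans a file line by line. With the default Scanner buffer,
	// any single line longer than bufio.MaxScanTokenSize (64 KiB) stops
	// Scan() and makes Err() return bufio.ErrTooLong ("token too long").
	func readLines(path string, maxLine int) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line limit so an over-long line (for example a large
		// JSON blob written on one line) no longer triggers ErrTooLong.
		sc.Buffer(make([]byte, 0, 64*1024), maxLine)

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}

	func main() {
		// "lastStart.txt" is a placeholder path for this sketch only.
		lines, err := readLines("lastStart.txt", 10*1024*1024)
		if err != nil {
			fmt.Fprintln(os.Stderr, "read failed:", err)
			os.Exit(1)
		}
		fmt.Println("read", len(lines), "lines")
	}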
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-857482 -n multinode-857482
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-857482 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (314.32s)

x
+
TestMultiNode/serial/StopMultiNode (141.52s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 stop
E0425 19:28:36.328463   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-857482 stop: exit status 82 (2m0.48393672s)

-- stdout --
	* Stopping node "multinode-857482-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-857482 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 status
E0425 19:30:45.438688   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-857482 status: exit status 3 (18.712806421s)

-- stdout --
	multinode-857482
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-857482-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0425 19:30:51.338532   44893 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host
	E0425 19:30:51.338567   44893 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.172:22: connect: no route to host

** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-857482 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-857482 -n multinode-857482
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-857482 logs -n 25: (1.645008099s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp multinode-857482-m02:/home/docker/cp-test.txt                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482:/home/docker/cp-test_multinode-857482-m02_multinode-857482.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n multinode-857482 sudo cat                                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-857482-m02_multinode-857482.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp multinode-857482-m02:/home/docker/cp-test.txt                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03:/home/docker/cp-test_multinode-857482-m02_multinode-857482-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n multinode-857482-m03 sudo cat                                   | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-857482-m02_multinode-857482-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp testdata/cp-test.txt                                                | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp multinode-857482-m03:/home/docker/cp-test.txt                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile932174876/001/cp-test_multinode-857482-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp multinode-857482-m03:/home/docker/cp-test.txt                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482:/home/docker/cp-test_multinode-857482-m03_multinode-857482.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n multinode-857482 sudo cat                                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-857482-m03_multinode-857482.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-857482 cp multinode-857482-m03:/home/docker/cp-test.txt                       | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m02:/home/docker/cp-test_multinode-857482-m03_multinode-857482-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n                                                                 | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | multinode-857482-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-857482 ssh -n multinode-857482-m02 sudo cat                                   | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	|         | /home/docker/cp-test_multinode-857482-m03_multinode-857482-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-857482 node stop m03                                                          | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:22 UTC |
	| node    | multinode-857482 node start                                                             | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:22 UTC | 25 Apr 24 19:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-857482                                                                | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:23 UTC |                     |
	| stop    | -p multinode-857482                                                                     | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:23 UTC |                     |
	| start   | -p multinode-857482                                                                     | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:25 UTC | 25 Apr 24 19:28 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-857482                                                                | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:28 UTC |                     |
	| node    | multinode-857482 node delete                                                            | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:28 UTC | 25 Apr 24 19:28 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-857482 stop                                                                   | multinode-857482 | jenkins | v1.33.0 | 25 Apr 24 19:28 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:25:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:25:18.314680   43102 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:25:18.314779   43102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:25:18.314790   43102 out.go:304] Setting ErrFile to fd 2...
	I0425 19:25:18.314794   43102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:25:18.314989   43102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:25:18.315535   43102 out.go:298] Setting JSON to false
	I0425 19:25:18.316463   43102 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4064,"bootTime":1714069054,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:25:18.316521   43102 start.go:139] virtualization: kvm guest
	I0425 19:25:18.319152   43102 out.go:177] * [multinode-857482] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:25:18.320813   43102 notify.go:220] Checking for updates...
	I0425 19:25:18.320825   43102 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:25:18.322184   43102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:25:18.323633   43102 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:25:18.324898   43102 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:25:18.326091   43102 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:25:18.327311   43102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:25:18.328829   43102 config.go:182] Loaded profile config "multinode-857482": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:25:18.328939   43102 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:25:18.329348   43102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:25:18.329395   43102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:25:18.345099   43102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0425 19:25:18.345486   43102 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:25:18.345978   43102 main.go:141] libmachine: Using API Version  1
	I0425 19:25:18.345999   43102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:25:18.346323   43102 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:25:18.346523   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:25:18.380864   43102 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:25:18.382109   43102 start.go:297] selected driver: kvm2
	I0425 19:25:18.382123   43102 start.go:901] validating driver "kvm2" against &{Name:multinode-857482 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.0 ClusterName:multinode-857482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:25:18.382310   43102 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:25:18.382638   43102 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:25:18.382710   43102 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:25:18.397462   43102 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:25:18.398199   43102 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:25:18.398289   43102 cni.go:84] Creating CNI manager for ""
	I0425 19:25:18.398303   43102 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0425 19:25:18.398368   43102 start.go:340] cluster config:
	{Name:multinode-857482 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-857482 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:25:18.398472   43102 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:25:18.400396   43102 out.go:177] * Starting "multinode-857482" primary control-plane node in "multinode-857482" cluster
	I0425 19:25:18.401794   43102 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:25:18.401833   43102 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:25:18.401843   43102 cache.go:56] Caching tarball of preloaded images
	I0425 19:25:18.401910   43102 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:25:18.401920   43102 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 19:25:18.402079   43102 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/config.json ...
	I0425 19:25:18.402318   43102 start.go:360] acquireMachinesLock for multinode-857482: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:25:18.402361   43102 start.go:364] duration metric: took 23.728µs to acquireMachinesLock for "multinode-857482"
	I0425 19:25:18.402375   43102 start.go:96] Skipping create...Using existing machine configuration
	I0425 19:25:18.402382   43102 fix.go:54] fixHost starting: 
	I0425 19:25:18.402642   43102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:25:18.402676   43102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:25:18.415971   43102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I0425 19:25:18.416347   43102 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:25:18.416812   43102 main.go:141] libmachine: Using API Version  1
	I0425 19:25:18.416835   43102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:25:18.417082   43102 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:25:18.417250   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:25:18.417360   43102 main.go:141] libmachine: (multinode-857482) Calling .GetState
	I0425 19:25:18.418810   43102 fix.go:112] recreateIfNeeded on multinode-857482: state=Running err=<nil>
	W0425 19:25:18.418850   43102 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 19:25:18.420624   43102 out.go:177] * Updating the running kvm2 "multinode-857482" VM ...
	I0425 19:25:18.421948   43102 machine.go:94] provisionDockerMachine start ...
	I0425 19:25:18.421971   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:25:18.422165   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:18.424634   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.425005   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:18.425029   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.425189   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:25:18.425337   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.425510   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.425645   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:25:18.425808   43102 main.go:141] libmachine: Using SSH client type: native
	I0425 19:25:18.426068   43102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0425 19:25:18.426096   43102 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 19:25:18.552103   43102 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-857482
	
	I0425 19:25:18.552142   43102 main.go:141] libmachine: (multinode-857482) Calling .GetMachineName
	I0425 19:25:18.552398   43102 buildroot.go:166] provisioning hostname "multinode-857482"
	I0425 19:25:18.552432   43102 main.go:141] libmachine: (multinode-857482) Calling .GetMachineName
	I0425 19:25:18.552623   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:18.555132   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.555516   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:18.555536   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.555697   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:25:18.555870   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.556026   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.556141   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:25:18.556299   43102 main.go:141] libmachine: Using SSH client type: native
	I0425 19:25:18.556504   43102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0425 19:25:18.556520   43102 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-857482 && echo "multinode-857482" | sudo tee /etc/hostname
	I0425 19:25:18.692590   43102 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-857482
	
	I0425 19:25:18.692632   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:18.695505   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.695870   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:18.695899   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.696079   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:25:18.696293   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.696508   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:18.696655   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:25:18.696801   43102 main.go:141] libmachine: Using SSH client type: native
	I0425 19:25:18.696980   43102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0425 19:25:18.697002   43102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-857482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-857482/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-857482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 19:25:18.811904   43102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 19:25:18.811936   43102 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 19:25:18.811955   43102 buildroot.go:174] setting up certificates
	I0425 19:25:18.811966   43102 provision.go:84] configureAuth start
	I0425 19:25:18.811975   43102 main.go:141] libmachine: (multinode-857482) Calling .GetMachineName
	I0425 19:25:18.812247   43102 main.go:141] libmachine: (multinode-857482) Calling .GetIP
	I0425 19:25:18.814884   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.815213   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:18.815236   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.815406   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:18.817570   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.817929   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:18.817952   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:18.818071   43102 provision.go:143] copyHostCerts
	I0425 19:25:18.818092   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:25:18.818115   43102 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 19:25:18.818124   43102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:25:18.818185   43102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 19:25:18.818314   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:25:18.818342   43102 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 19:25:18.818352   43102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:25:18.818392   43102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 19:25:18.818455   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:25:18.818474   43102 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 19:25:18.818480   43102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:25:18.818503   43102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 19:25:18.818560   43102 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.multinode-857482 san=[127.0.0.1 192.168.39.194 localhost minikube multinode-857482]
	I0425 19:25:19.031008   43102 provision.go:177] copyRemoteCerts
	I0425 19:25:19.031060   43102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 19:25:19.031086   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:19.033802   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:19.034146   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:19.034168   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:19.034402   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:25:19.034607   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:19.034717   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:25:19.034878   43102 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/multinode-857482/id_rsa Username:docker}
	I0425 19:25:19.123261   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 19:25:19.123324   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 19:25:19.152747   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 19:25:19.152818   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0425 19:25:19.183155   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 19:25:19.183238   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 19:25:19.209928   43102 provision.go:87] duration metric: took 397.949061ms to configureAuth
	I0425 19:25:19.209959   43102 buildroot.go:189] setting minikube options for container-runtime
	I0425 19:25:19.210185   43102 config.go:182] Loaded profile config "multinode-857482": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:25:19.210282   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:25:19.213095   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:19.213576   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:25:19.213612   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:25:19.213740   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:25:19.213949   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:19.214113   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:25:19.214255   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:25:19.214435   43102 main.go:141] libmachine: Using SSH client type: native
	I0425 19:25:19.214625   43102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0425 19:25:19.214647   43102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 19:26:49.947661   43102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 19:26:49.947688   43102 machine.go:97] duration metric: took 1m31.525724233s to provisionDockerMachine
	I0425 19:26:49.947700   43102 start.go:293] postStartSetup for "multinode-857482" (driver="kvm2")
	I0425 19:26:49.947710   43102 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 19:26:49.947727   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:26:49.948056   43102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 19:26:49.948099   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:26:49.950895   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:49.951232   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:26:49.951256   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:49.951418   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:26:49.951605   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:26:49.951759   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:26:49.951880   43102 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/multinode-857482/id_rsa Username:docker}
	I0425 19:26:50.043042   43102 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 19:26:50.048344   43102 command_runner.go:130] > NAME=Buildroot
	I0425 19:26:50.048363   43102 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0425 19:26:50.048367   43102 command_runner.go:130] > ID=buildroot
	I0425 19:26:50.048380   43102 command_runner.go:130] > VERSION_ID=2023.02.9
	I0425 19:26:50.048388   43102 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0425 19:26:50.048422   43102 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 19:26:50.048439   43102 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 19:26:50.048509   43102 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 19:26:50.048581   43102 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 19:26:50.048590   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 19:26:50.048664   43102 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 19:26:50.059300   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:26:50.087316   43102 start.go:296] duration metric: took 139.604894ms for postStartSetup
	I0425 19:26:50.087357   43102 fix.go:56] duration metric: took 1m31.684974618s for fixHost
	I0425 19:26:50.087375   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:26:50.090036   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.090399   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:26:50.090434   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.090553   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:26:50.090764   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:26:50.090884   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:26:50.091006   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:26:50.091166   43102 main.go:141] libmachine: Using SSH client type: native
	I0425 19:26:50.091366   43102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.194 22 <nil> <nil>}
	I0425 19:26:50.091379   43102 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 19:26:50.208129   43102 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714073210.188297828
	
	I0425 19:26:50.208155   43102 fix.go:216] guest clock: 1714073210.188297828
	I0425 19:26:50.208163   43102 fix.go:229] Guest: 2024-04-25 19:26:50.188297828 +0000 UTC Remote: 2024-04-25 19:26:50.087360479 +0000 UTC m=+91.819923739 (delta=100.937349ms)
	I0425 19:26:50.208181   43102 fix.go:200] guest clock delta is within tolerance: 100.937349ms
	I0425 19:26:50.208186   43102 start.go:83] releasing machines lock for "multinode-857482", held for 1m31.805817152s
	I0425 19:26:50.208201   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:26:50.208479   43102 main.go:141] libmachine: (multinode-857482) Calling .GetIP
	I0425 19:26:50.211118   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.211534   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:26:50.211554   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.211705   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:26:50.212409   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:26:50.212584   43102 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:26:50.212684   43102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 19:26:50.212718   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:26:50.212780   43102 ssh_runner.go:195] Run: cat /version.json
	I0425 19:26:50.212816   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:26:50.215261   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.215505   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.215638   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:26:50.215665   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.215791   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:26:50.215927   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:26:50.215945   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:26:50.215960   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:26:50.216110   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:26:50.216111   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:26:50.216276   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:26:50.216275   43102 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/multinode-857482/id_rsa Username:docker}
	I0425 19:26:50.216408   43102 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:26:50.216528   43102 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/multinode-857482/id_rsa Username:docker}
	I0425 19:26:50.322587   43102 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0425 19:26:50.322645   43102 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0425 19:26:50.322802   43102 ssh_runner.go:195] Run: systemctl --version
	I0425 19:26:50.328947   43102 command_runner.go:130] > systemd 252 (252)
	I0425 19:26:50.328979   43102 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0425 19:26:50.329223   43102 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 19:26:50.489339   43102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0425 19:26:50.498255   43102 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0425 19:26:50.498671   43102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 19:26:50.498735   43102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 19:26:50.511299   43102 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0425 19:26:50.511396   43102 start.go:494] detecting cgroup driver to use...
	I0425 19:26:50.511461   43102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 19:26:50.529730   43102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 19:26:50.544068   43102 docker.go:217] disabling cri-docker service (if available) ...
	I0425 19:26:50.544129   43102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 19:26:50.558754   43102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 19:26:50.573886   43102 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 19:26:50.728966   43102 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 19:26:50.880723   43102 docker.go:233] disabling docker service ...
	I0425 19:26:50.880802   43102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 19:26:50.905733   43102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 19:26:50.924979   43102 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 19:26:51.079638   43102 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 19:26:51.237279   43102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 19:26:51.252454   43102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 19:26:51.273572   43102 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
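	(The step above points crictl at CRI-O's default socket. A minimal sketch of the resulting file plus a quick check; the verify command is illustrative here, though this log does run `crictl version` itself after the crio restart below:)

	    # /etc/crictl.yaml as written by the tee command above
	    runtime-endpoint: unix:///var/run/crio/crio.sock

	    # confirm crictl can reach the CRI-O socket
	    sudo crictl version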
	I0425 19:26:51.273949   43102 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 19:26:51.274007   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.285838   43102 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 19:26:51.285912   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.297897   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.309188   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.320337   43102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 19:26:51.332216   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.343879   43102 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:26:51.356612   43102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
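	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a sketch assuming no other overrides, and the values agree with the `crio config` dump later in this log:)

	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]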
	I0425 19:26:51.369265   43102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 19:26:51.379917   43102 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0425 19:26:51.379999   43102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 19:26:51.390135   43102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:26:51.541085   43102 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 19:27:01.174870   43102 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.633748192s)
	I0425 19:27:01.174911   43102 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 19:27:01.174963   43102 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 19:27:01.180662   43102 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0425 19:27:01.180688   43102 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0425 19:27:01.180694   43102 command_runner.go:130] > Device: 0,22	Inode: 1318        Links: 1
	I0425 19:27:01.180701   43102 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0425 19:27:01.180706   43102 command_runner.go:130] > Access: 2024-04-25 19:27:01.031436493 +0000
	I0425 19:27:01.180712   43102 command_runner.go:130] > Modify: 2024-04-25 19:27:01.031436493 +0000
	I0425 19:27:01.180718   43102 command_runner.go:130] > Change: 2024-04-25 19:27:01.031436493 +0000
	I0425 19:27:01.180722   43102 command_runner.go:130] >  Birth: -
	I0425 19:27:01.180737   43102 start.go:562] Will wait 60s for crictl version
	I0425 19:27:01.180789   43102 ssh_runner.go:195] Run: which crictl
	I0425 19:27:01.184880   43102 command_runner.go:130] > /usr/bin/crictl
	I0425 19:27:01.185061   43102 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 19:27:01.224955   43102 command_runner.go:130] > Version:  0.1.0
	I0425 19:27:01.224982   43102 command_runner.go:130] > RuntimeName:  cri-o
	I0425 19:27:01.224990   43102 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0425 19:27:01.224999   43102 command_runner.go:130] > RuntimeApiVersion:  v1
	I0425 19:27:01.226094   43102 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 19:27:01.226174   43102 ssh_runner.go:195] Run: crio --version
	I0425 19:27:01.257617   43102 command_runner.go:130] > crio version 1.29.1
	I0425 19:27:01.257643   43102 command_runner.go:130] > Version:        1.29.1
	I0425 19:27:01.257651   43102 command_runner.go:130] > GitCommit:      unknown
	I0425 19:27:01.257658   43102 command_runner.go:130] > GitCommitDate:  unknown
	I0425 19:27:01.257664   43102 command_runner.go:130] > GitTreeState:   clean
	I0425 19:27:01.257673   43102 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0425 19:27:01.257680   43102 command_runner.go:130] > GoVersion:      go1.21.6
	I0425 19:27:01.257687   43102 command_runner.go:130] > Compiler:       gc
	I0425 19:27:01.257703   43102 command_runner.go:130] > Platform:       linux/amd64
	I0425 19:27:01.257717   43102 command_runner.go:130] > Linkmode:       dynamic
	I0425 19:27:01.257739   43102 command_runner.go:130] > BuildTags:      
	I0425 19:27:01.257750   43102 command_runner.go:130] >   containers_image_ostree_stub
	I0425 19:27:01.257757   43102 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0425 19:27:01.257767   43102 command_runner.go:130] >   btrfs_noversion
	I0425 19:27:01.257774   43102 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0425 19:27:01.257783   43102 command_runner.go:130] >   libdm_no_deferred_remove
	I0425 19:27:01.257788   43102 command_runner.go:130] >   seccomp
	I0425 19:27:01.257795   43102 command_runner.go:130] > LDFlags:          unknown
	I0425 19:27:01.257805   43102 command_runner.go:130] > SeccompEnabled:   true
	I0425 19:27:01.257811   43102 command_runner.go:130] > AppArmorEnabled:  false
	I0425 19:27:01.259111   43102 ssh_runner.go:195] Run: crio --version
	I0425 19:27:01.290646   43102 command_runner.go:130] > crio version 1.29.1
	I0425 19:27:01.290668   43102 command_runner.go:130] > Version:        1.29.1
	I0425 19:27:01.290674   43102 command_runner.go:130] > GitCommit:      unknown
	I0425 19:27:01.290678   43102 command_runner.go:130] > GitCommitDate:  unknown
	I0425 19:27:01.290683   43102 command_runner.go:130] > GitTreeState:   clean
	I0425 19:27:01.290688   43102 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0425 19:27:01.290692   43102 command_runner.go:130] > GoVersion:      go1.21.6
	I0425 19:27:01.290696   43102 command_runner.go:130] > Compiler:       gc
	I0425 19:27:01.290700   43102 command_runner.go:130] > Platform:       linux/amd64
	I0425 19:27:01.290704   43102 command_runner.go:130] > Linkmode:       dynamic
	I0425 19:27:01.290710   43102 command_runner.go:130] > BuildTags:      
	I0425 19:27:01.290714   43102 command_runner.go:130] >   containers_image_ostree_stub
	I0425 19:27:01.290718   43102 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0425 19:27:01.290722   43102 command_runner.go:130] >   btrfs_noversion
	I0425 19:27:01.290730   43102 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0425 19:27:01.290738   43102 command_runner.go:130] >   libdm_no_deferred_remove
	I0425 19:27:01.290750   43102 command_runner.go:130] >   seccomp
	I0425 19:27:01.290754   43102 command_runner.go:130] > LDFlags:          unknown
	I0425 19:27:01.290758   43102 command_runner.go:130] > SeccompEnabled:   true
	I0425 19:27:01.290762   43102 command_runner.go:130] > AppArmorEnabled:  false
	I0425 19:27:01.294361   43102 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 19:27:01.295728   43102 main.go:141] libmachine: (multinode-857482) Calling .GetIP
	I0425 19:27:01.298276   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:27:01.298694   43102 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:27:01.298724   43102 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:27:01.298908   43102 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 19:27:01.303545   43102 command_runner.go:130] > 192.168.39.1	host.minikube.internal
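	(The grep above only verifies that the host-gateway alias is already present in /etc/hosts; if it were missing, minikube would add it. A hypothetical manual equivalent, not taken from this log:)

	    grep -q 'host.minikube.internal' /etc/hosts || \
	      echo '192.168.39.1	host.minikube.internal' | sudo tee -a /etc/hosts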
	I0425 19:27:01.303754   43102 kubeadm.go:877] updating cluster {Name:multinode-857482 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.0 ClusterName:multinode-857482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 19:27:01.303907   43102 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:27:01.303960   43102 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:27:01.354297   43102 command_runner.go:130] > {
	I0425 19:27:01.354327   43102 command_runner.go:130] >   "images": [
	I0425 19:27:01.354334   43102 command_runner.go:130] >     {
	I0425 19:27:01.354345   43102 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0425 19:27:01.354351   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.354356   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0425 19:27:01.354360   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354364   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.354373   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0425 19:27:01.354380   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0425 19:27:01.354383   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354390   43102 command_runner.go:130] >       "size": "65291810",
	I0425 19:27:01.354400   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.354407   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.354420   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.354430   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.354436   43102 command_runner.go:130] >     },
	I0425 19:27:01.354444   43102 command_runner.go:130] >     {
	I0425 19:27:01.354454   43102 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0425 19:27:01.354458   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.354472   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0425 19:27:01.354481   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354488   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.354500   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0425 19:27:01.354515   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0425 19:27:01.354522   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354528   43102 command_runner.go:130] >       "size": "1363676",
	I0425 19:27:01.354536   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.354546   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.354554   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.354558   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.354564   43102 command_runner.go:130] >     },
	I0425 19:27:01.354572   43102 command_runner.go:130] >     {
	I0425 19:27:01.354582   43102 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0425 19:27:01.354591   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.354601   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0425 19:27:01.354610   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354617   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.354632   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0425 19:27:01.354644   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0425 19:27:01.354652   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354659   43102 command_runner.go:130] >       "size": "31470524",
	I0425 19:27:01.354668   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.354675   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.354684   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.354691   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.354702   43102 command_runner.go:130] >     },
	I0425 19:27:01.354711   43102 command_runner.go:130] >     {
	I0425 19:27:01.354720   43102 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0425 19:27:01.354728   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.354733   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0425 19:27:01.354748   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354756   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.354771   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0425 19:27:01.354795   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0425 19:27:01.354804   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354814   43102 command_runner.go:130] >       "size": "61245718",
	I0425 19:27:01.354823   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.354837   43102 command_runner.go:130] >       "username": "nonroot",
	I0425 19:27:01.354847   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.354853   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.354861   43102 command_runner.go:130] >     },
	I0425 19:27:01.354867   43102 command_runner.go:130] >     {
	I0425 19:27:01.354877   43102 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0425 19:27:01.354887   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.354894   43102 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0425 19:27:01.354901   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354905   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.354919   43102 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0425 19:27:01.354934   43102 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0425 19:27:01.354942   43102 command_runner.go:130] >       ],
	I0425 19:27:01.354950   43102 command_runner.go:130] >       "size": "150779692",
	I0425 19:27:01.354959   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.354965   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.354974   43102 command_runner.go:130] >       },
	I0425 19:27:01.354980   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.354985   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.354989   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.354996   43102 command_runner.go:130] >     },
	I0425 19:27:01.355002   43102 command_runner.go:130] >     {
	I0425 19:27:01.355018   43102 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0425 19:27:01.355028   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.355035   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0425 19:27:01.355043   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355050   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.355065   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0425 19:27:01.355075   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0425 19:27:01.355081   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355089   43102 command_runner.go:130] >       "size": "117609952",
	I0425 19:27:01.355098   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.355105   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.355114   43102 command_runner.go:130] >       },
	I0425 19:27:01.355126   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.355136   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.355142   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.355150   43102 command_runner.go:130] >     },
	I0425 19:27:01.355155   43102 command_runner.go:130] >     {
	I0425 19:27:01.355163   43102 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0425 19:27:01.355169   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.355181   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0425 19:27:01.355190   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355197   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.355213   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0425 19:27:01.355229   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0425 19:27:01.355237   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355243   43102 command_runner.go:130] >       "size": "112170310",
	I0425 19:27:01.355249   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.355254   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.355263   43102 command_runner.go:130] >       },
	I0425 19:27:01.355270   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.355280   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.355289   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.355294   43102 command_runner.go:130] >     },
	I0425 19:27:01.355300   43102 command_runner.go:130] >     {
	I0425 19:27:01.355312   43102 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0425 19:27:01.355322   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.355328   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0425 19:27:01.355332   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355337   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.355365   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0425 19:27:01.355382   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0425 19:27:01.355390   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355397   43102 command_runner.go:130] >       "size": "85932953",
	I0425 19:27:01.355405   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.355412   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.355417   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.355421   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.355423   43102 command_runner.go:130] >     },
	I0425 19:27:01.355429   43102 command_runner.go:130] >     {
	I0425 19:27:01.355439   43102 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0425 19:27:01.355445   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.355453   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0425 19:27:01.355459   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355466   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.355481   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0425 19:27:01.355495   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0425 19:27:01.355502   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355507   43102 command_runner.go:130] >       "size": "63026502",
	I0425 19:27:01.355515   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.355521   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.355530   43102 command_runner.go:130] >       },
	I0425 19:27:01.355537   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.355546   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.355553   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.355560   43102 command_runner.go:130] >     },
	I0425 19:27:01.355565   43102 command_runner.go:130] >     {
	I0425 19:27:01.355576   43102 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0425 19:27:01.355588   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.355598   43102 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0425 19:27:01.355605   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355616   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.355628   43102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0425 19:27:01.355643   43102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0425 19:27:01.355651   43102 command_runner.go:130] >       ],
	I0425 19:27:01.355658   43102 command_runner.go:130] >       "size": "750414",
	I0425 19:27:01.355666   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.355674   43102 command_runner.go:130] >         "value": "65535"
	I0425 19:27:01.355685   43102 command_runner.go:130] >       },
	I0425 19:27:01.355692   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.355701   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.355707   43102 command_runner.go:130] >       "pinned": true
	I0425 19:27:01.355716   43102 command_runner.go:130] >     }
	I0425 19:27:01.355722   43102 command_runner.go:130] >   ]
	I0425 19:27:01.355731   43102 command_runner.go:130] > }
	I0425 19:27:01.356065   43102 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:27:01.356088   43102 crio.go:433] Images already preloaded, skipping extraction
	I0425 19:27:01.356148   43102 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:27:01.399050   43102 command_runner.go:130] > {
	I0425 19:27:01.399071   43102 command_runner.go:130] >   "images": [
	I0425 19:27:01.399075   43102 command_runner.go:130] >     {
	I0425 19:27:01.399083   43102 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0425 19:27:01.399087   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399093   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0425 19:27:01.399096   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399100   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399108   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0425 19:27:01.399115   43102 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0425 19:27:01.399118   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399123   43102 command_runner.go:130] >       "size": "65291810",
	I0425 19:27:01.399126   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.399130   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399140   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399144   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399152   43102 command_runner.go:130] >     },
	I0425 19:27:01.399156   43102 command_runner.go:130] >     {
	I0425 19:27:01.399162   43102 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0425 19:27:01.399170   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399175   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0425 19:27:01.399178   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399183   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399190   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0425 19:27:01.399204   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0425 19:27:01.399207   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399211   43102 command_runner.go:130] >       "size": "1363676",
	I0425 19:27:01.399215   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.399225   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399231   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399236   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399242   43102 command_runner.go:130] >     },
	I0425 19:27:01.399245   43102 command_runner.go:130] >     {
	I0425 19:27:01.399253   43102 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0425 19:27:01.399258   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399265   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0425 19:27:01.399272   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399276   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399286   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0425 19:27:01.399294   43102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0425 19:27:01.399301   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399305   43102 command_runner.go:130] >       "size": "31470524",
	I0425 19:27:01.399311   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.399325   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399335   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399339   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399343   43102 command_runner.go:130] >     },
	I0425 19:27:01.399346   43102 command_runner.go:130] >     {
	I0425 19:27:01.399353   43102 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0425 19:27:01.399359   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399364   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0425 19:27:01.399371   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399379   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399389   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0425 19:27:01.399421   43102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0425 19:27:01.399430   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399434   43102 command_runner.go:130] >       "size": "61245718",
	I0425 19:27:01.399439   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.399443   43102 command_runner.go:130] >       "username": "nonroot",
	I0425 19:27:01.399449   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399456   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399460   43102 command_runner.go:130] >     },
	I0425 19:27:01.399465   43102 command_runner.go:130] >     {
	I0425 19:27:01.399472   43102 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0425 19:27:01.399478   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399483   43102 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0425 19:27:01.399488   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399492   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399502   43102 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0425 19:27:01.399511   43102 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0425 19:27:01.399517   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399521   43102 command_runner.go:130] >       "size": "150779692",
	I0425 19:27:01.399527   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.399531   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.399535   43102 command_runner.go:130] >       },
	I0425 19:27:01.399539   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399543   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399550   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399553   43102 command_runner.go:130] >     },
	I0425 19:27:01.399560   43102 command_runner.go:130] >     {
	I0425 19:27:01.399566   43102 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0425 19:27:01.399572   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399578   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0425 19:27:01.399584   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399587   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399597   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0425 19:27:01.399606   43102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0425 19:27:01.399612   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399621   43102 command_runner.go:130] >       "size": "117609952",
	I0425 19:27:01.399628   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.399632   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.399641   43102 command_runner.go:130] >       },
	I0425 19:27:01.399648   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399653   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399659   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399663   43102 command_runner.go:130] >     },
	I0425 19:27:01.399669   43102 command_runner.go:130] >     {
	I0425 19:27:01.399675   43102 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0425 19:27:01.399681   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399687   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0425 19:27:01.399693   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399697   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399706   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0425 19:27:01.399719   43102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0425 19:27:01.399733   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399743   43102 command_runner.go:130] >       "size": "112170310",
	I0425 19:27:01.399749   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.399760   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.399768   43102 command_runner.go:130] >       },
	I0425 19:27:01.399774   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399783   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399789   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399797   43102 command_runner.go:130] >     },
	I0425 19:27:01.399802   43102 command_runner.go:130] >     {
	I0425 19:27:01.399815   43102 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0425 19:27:01.399823   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399832   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0425 19:27:01.399841   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399847   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399874   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0425 19:27:01.399885   43102 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0425 19:27:01.399890   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399894   43102 command_runner.go:130] >       "size": "85932953",
	I0425 19:27:01.399901   43102 command_runner.go:130] >       "uid": null,
	I0425 19:27:01.399909   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.399916   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.399920   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.399926   43102 command_runner.go:130] >     },
	I0425 19:27:01.399930   43102 command_runner.go:130] >     {
	I0425 19:27:01.399938   43102 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0425 19:27:01.399944   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.399949   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0425 19:27:01.399952   43102 command_runner.go:130] >       ],
	I0425 19:27:01.399958   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.399965   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0425 19:27:01.399988   43102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0425 19:27:01.399997   43102 command_runner.go:130] >       ],
	I0425 19:27:01.400001   43102 command_runner.go:130] >       "size": "63026502",
	I0425 19:27:01.400007   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.400012   43102 command_runner.go:130] >         "value": "0"
	I0425 19:27:01.400018   43102 command_runner.go:130] >       },
	I0425 19:27:01.400022   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.400029   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.400033   43102 command_runner.go:130] >       "pinned": false
	I0425 19:27:01.400036   43102 command_runner.go:130] >     },
	I0425 19:27:01.400040   43102 command_runner.go:130] >     {
	I0425 19:27:01.400047   43102 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0425 19:27:01.400053   43102 command_runner.go:130] >       "repoTags": [
	I0425 19:27:01.400057   43102 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0425 19:27:01.400063   43102 command_runner.go:130] >       ],
	I0425 19:27:01.400068   43102 command_runner.go:130] >       "repoDigests": [
	I0425 19:27:01.400076   43102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0425 19:27:01.400087   43102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0425 19:27:01.400093   43102 command_runner.go:130] >       ],
	I0425 19:27:01.400097   43102 command_runner.go:130] >       "size": "750414",
	I0425 19:27:01.400103   43102 command_runner.go:130] >       "uid": {
	I0425 19:27:01.400107   43102 command_runner.go:130] >         "value": "65535"
	I0425 19:27:01.400113   43102 command_runner.go:130] >       },
	I0425 19:27:01.400117   43102 command_runner.go:130] >       "username": "",
	I0425 19:27:01.400121   43102 command_runner.go:130] >       "spec": null,
	I0425 19:27:01.400130   43102 command_runner.go:130] >       "pinned": true
	I0425 19:27:01.400136   43102 command_runner.go:130] >     }
	I0425 19:27:01.400140   43102 command_runner.go:130] >   ]
	I0425 19:27:01.400145   43102 command_runner.go:130] > }
	I0425 19:27:01.400255   43102 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:27:01.400265   43102 cache_images.go:84] Images are preloaded, skipping loading
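	(The two `sudo crictl images --output json` dumps above are what minikube checks against the image set expected for Kubernetes v1.30.0 before concluding that the preloaded tarball does not need to be extracted or loaded again. To eyeball the same data by hand, one option, assuming jq is available on the node, which this log does not show:)

	    sudo crictl images --output json | jq -r '.images[].repoTags[]'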
	I0425 19:27:01.400273   43102 kubeadm.go:928] updating node { 192.168.39.194 8443 v1.30.0 crio true true} ...
	I0425 19:27:01.400378   43102 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-857482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-857482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
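	(The kubelet flags above are delivered as a systemd drop-in override: ExecStart is cleared and then redefined with the node-specific arguments. To inspect the effective unit with all drop-ins applied on the node, one can run the following illustrative command, which is not part of this log:)

	    sudo systemctl cat kubelet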
	I0425 19:27:01.400444   43102 ssh_runner.go:195] Run: crio config
	I0425 19:27:01.444538   43102 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0425 19:27:01.444571   43102 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0425 19:27:01.444582   43102 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0425 19:27:01.444587   43102 command_runner.go:130] > #
	I0425 19:27:01.444598   43102 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0425 19:27:01.444607   43102 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0425 19:27:01.444618   43102 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0425 19:27:01.444630   43102 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0425 19:27:01.444636   43102 command_runner.go:130] > # reload'.
	I0425 19:27:01.444650   43102 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0425 19:27:01.444661   43102 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0425 19:27:01.444674   43102 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0425 19:27:01.444685   43102 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0425 19:27:01.444694   43102 command_runner.go:130] > [crio]
	I0425 19:27:01.444705   43102 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0425 19:27:01.444715   43102 command_runner.go:130] > # containers images, in this directory.
	I0425 19:27:01.444722   43102 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0425 19:27:01.444753   43102 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0425 19:27:01.444764   43102 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0425 19:27:01.444775   43102 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0425 19:27:01.444785   43102 command_runner.go:130] > # imagestore = ""
	I0425 19:27:01.444795   43102 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0425 19:27:01.444807   43102 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0425 19:27:01.444817   43102 command_runner.go:130] > storage_driver = "overlay"
	I0425 19:27:01.444830   43102 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0425 19:27:01.444840   43102 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0425 19:27:01.444847   43102 command_runner.go:130] > storage_option = [
	I0425 19:27:01.444858   43102 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0425 19:27:01.444866   43102 command_runner.go:130] > ]
	I0425 19:27:01.444875   43102 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0425 19:27:01.444887   43102 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0425 19:27:01.444897   43102 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0425 19:27:01.444905   43102 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0425 19:27:01.444918   43102 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0425 19:27:01.444928   43102 command_runner.go:130] > # always happen on a node reboot
	I0425 19:27:01.444937   43102 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0425 19:27:01.444960   43102 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0425 19:27:01.444973   43102 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0425 19:27:01.444983   43102 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0425 19:27:01.444991   43102 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0425 19:27:01.445005   43102 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0425 19:27:01.445017   43102 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0425 19:27:01.445027   43102 command_runner.go:130] > # internal_wipe = true
	I0425 19:27:01.445039   43102 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0425 19:27:01.445050   43102 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0425 19:27:01.445056   43102 command_runner.go:130] > # internal_repair = false
	I0425 19:27:01.445068   43102 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0425 19:27:01.445080   43102 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0425 19:27:01.445092   43102 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0425 19:27:01.445103   43102 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0425 19:27:01.445114   43102 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0425 19:27:01.445123   43102 command_runner.go:130] > [crio.api]
	I0425 19:27:01.445132   43102 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0425 19:27:01.445146   43102 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0425 19:27:01.445163   43102 command_runner.go:130] > # IP address on which the stream server will listen.
	I0425 19:27:01.445174   43102 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0425 19:27:01.445186   43102 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0425 19:27:01.445198   43102 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0425 19:27:01.445207   43102 command_runner.go:130] > # stream_port = "0"
	I0425 19:27:01.445218   43102 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0425 19:27:01.445228   43102 command_runner.go:130] > # stream_enable_tls = false
	I0425 19:27:01.445237   43102 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0425 19:27:01.445246   43102 command_runner.go:130] > # stream_idle_timeout = ""
	I0425 19:27:01.445256   43102 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0425 19:27:01.445268   43102 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0425 19:27:01.445275   43102 command_runner.go:130] > # minutes.
	I0425 19:27:01.445283   43102 command_runner.go:130] > # stream_tls_cert = ""
	I0425 19:27:01.445292   43102 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0425 19:27:01.445306   43102 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0425 19:27:01.445325   43102 command_runner.go:130] > # stream_tls_key = ""
	I0425 19:27:01.445338   43102 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0425 19:27:01.445353   43102 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0425 19:27:01.445370   43102 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0425 19:27:01.445380   43102 command_runner.go:130] > # stream_tls_ca = ""
	I0425 19:27:01.445391   43102 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0425 19:27:01.445401   43102 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0425 19:27:01.445412   43102 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0425 19:27:01.445422   43102 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0425 19:27:01.445431   43102 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0425 19:27:01.445442   43102 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0425 19:27:01.445451   43102 command_runner.go:130] > [crio.runtime]
	I0425 19:27:01.445461   43102 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0425 19:27:01.445472   43102 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0425 19:27:01.445481   43102 command_runner.go:130] > # "nofile=1024:2048"
	I0425 19:27:01.445491   43102 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0425 19:27:01.445500   43102 command_runner.go:130] > # default_ulimits = [
	I0425 19:27:01.445505   43102 command_runner.go:130] > # ]
	I0425 19:27:01.445515   43102 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0425 19:27:01.445524   43102 command_runner.go:130] > # no_pivot = false
	I0425 19:27:01.445533   43102 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0425 19:27:01.445547   43102 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0425 19:27:01.445559   43102 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0425 19:27:01.445578   43102 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0425 19:27:01.445589   43102 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0425 19:27:01.445603   43102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0425 19:27:01.445611   43102 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0425 19:27:01.445622   43102 command_runner.go:130] > # Cgroup setting for conmon
	I0425 19:27:01.445637   43102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0425 19:27:01.445649   43102 command_runner.go:130] > conmon_cgroup = "pod"
	I0425 19:27:01.445663   43102 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0425 19:27:01.445675   43102 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0425 19:27:01.445689   43102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0425 19:27:01.445699   43102 command_runner.go:130] > conmon_env = [
	I0425 19:27:01.445708   43102 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0425 19:27:01.445717   43102 command_runner.go:130] > ]
	I0425 19:27:01.445725   43102 command_runner.go:130] > # Additional environment variables to set for all the
	I0425 19:27:01.445736   43102 command_runner.go:130] > # containers. These are overridden if set in the
	I0425 19:27:01.445750   43102 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0425 19:27:01.445759   43102 command_runner.go:130] > # default_env = [
	I0425 19:27:01.445764   43102 command_runner.go:130] > # ]
	I0425 19:27:01.445777   43102 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0425 19:27:01.445792   43102 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0425 19:27:01.445801   43102 command_runner.go:130] > # selinux = false
	I0425 19:27:01.445812   43102 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0425 19:27:01.445826   43102 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0425 19:27:01.445839   43102 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0425 19:27:01.445849   43102 command_runner.go:130] > # seccomp_profile = ""
	I0425 19:27:01.445859   43102 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0425 19:27:01.445873   43102 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0425 19:27:01.445885   43102 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0425 19:27:01.445893   43102 command_runner.go:130] > # which might increase security.
	I0425 19:27:01.445902   43102 command_runner.go:130] > # This option is currently deprecated,
	I0425 19:27:01.445912   43102 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0425 19:27:01.445923   43102 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0425 19:27:01.445936   43102 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0425 19:27:01.445947   43102 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0425 19:27:01.445961   43102 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0425 19:27:01.445974   43102 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0425 19:27:01.445984   43102 command_runner.go:130] > # This option supports live configuration reload.
	I0425 19:27:01.445996   43102 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0425 19:27:01.446007   43102 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0425 19:27:01.446017   43102 command_runner.go:130] > # the cgroup blockio controller.
	I0425 19:27:01.446025   43102 command_runner.go:130] > # blockio_config_file = ""
	I0425 19:27:01.446039   43102 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0425 19:27:01.446049   43102 command_runner.go:130] > # blockio parameters.
	I0425 19:27:01.446057   43102 command_runner.go:130] > # blockio_reload = false
	I0425 19:27:01.446071   43102 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0425 19:27:01.446081   43102 command_runner.go:130] > # irqbalance daemon.
	I0425 19:27:01.446090   43102 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0425 19:27:01.446103   43102 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0425 19:27:01.446117   43102 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0425 19:27:01.446131   43102 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0425 19:27:01.446143   43102 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0425 19:27:01.446158   43102 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0425 19:27:01.446170   43102 command_runner.go:130] > # This option supports live configuration reload.
	I0425 19:27:01.446176   43102 command_runner.go:130] > # rdt_config_file = ""
	I0425 19:27:01.446187   43102 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0425 19:27:01.446198   43102 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0425 19:27:01.446231   43102 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0425 19:27:01.446242   43102 command_runner.go:130] > # separate_pull_cgroup = ""
	I0425 19:27:01.446252   43102 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0425 19:27:01.446264   43102 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0425 19:27:01.446273   43102 command_runner.go:130] > # will be added.
	I0425 19:27:01.446281   43102 command_runner.go:130] > # default_capabilities = [
	I0425 19:27:01.446289   43102 command_runner.go:130] > # 	"CHOWN",
	I0425 19:27:01.446295   43102 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0425 19:27:01.446304   43102 command_runner.go:130] > # 	"FSETID",
	I0425 19:27:01.446308   43102 command_runner.go:130] > # 	"FOWNER",
	I0425 19:27:01.446319   43102 command_runner.go:130] > # 	"SETGID",
	I0425 19:27:01.446328   43102 command_runner.go:130] > # 	"SETUID",
	I0425 19:27:01.446333   43102 command_runner.go:130] > # 	"SETPCAP",
	I0425 19:27:01.446342   43102 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0425 19:27:01.446347   43102 command_runner.go:130] > # 	"KILL",
	I0425 19:27:01.446355   43102 command_runner.go:130] > # ]
	I0425 19:27:01.446366   43102 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0425 19:27:01.446380   43102 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0425 19:27:01.446390   43102 command_runner.go:130] > # add_inheritable_capabilities = false
	I0425 19:27:01.446404   43102 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0425 19:27:01.446419   43102 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0425 19:27:01.446425   43102 command_runner.go:130] > default_sysctls = [
	I0425 19:27:01.446440   43102 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0425 19:27:01.446448   43102 command_runner.go:130] > ]
	I0425 19:27:01.446456   43102 command_runner.go:130] > # List of devices on the host that a
	I0425 19:27:01.446471   43102 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0425 19:27:01.446481   43102 command_runner.go:130] > # allowed_devices = [
	I0425 19:27:01.446489   43102 command_runner.go:130] > # 	"/dev/fuse",
	I0425 19:27:01.446498   43102 command_runner.go:130] > # ]
	I0425 19:27:01.446506   43102 command_runner.go:130] > # List of additional devices, specified as
	I0425 19:27:01.446520   43102 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0425 19:27:01.446536   43102 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0425 19:27:01.446549   43102 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0425 19:27:01.446559   43102 command_runner.go:130] > # additional_devices = [
	I0425 19:27:01.446564   43102 command_runner.go:130] > # ]
	I0425 19:27:01.446574   43102 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0425 19:27:01.446584   43102 command_runner.go:130] > # cdi_spec_dirs = [
	I0425 19:27:01.446589   43102 command_runner.go:130] > # 	"/etc/cdi",
	I0425 19:27:01.446599   43102 command_runner.go:130] > # 	"/var/run/cdi",
	I0425 19:27:01.446604   43102 command_runner.go:130] > # ]
	I0425 19:27:01.446615   43102 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0425 19:27:01.446628   43102 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0425 19:27:01.446639   43102 command_runner.go:130] > # Defaults to false.
	I0425 19:27:01.446649   43102 command_runner.go:130] > # device_ownership_from_security_context = false
	I0425 19:27:01.446662   43102 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0425 19:27:01.446675   43102 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0425 19:27:01.446684   43102 command_runner.go:130] > # hooks_dir = [
	I0425 19:27:01.446691   43102 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0425 19:27:01.446699   43102 command_runner.go:130] > # ]
	I0425 19:27:01.446708   43102 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0425 19:27:01.446721   43102 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0425 19:27:01.446732   43102 command_runner.go:130] > # its default mounts from the following two files:
	I0425 19:27:01.446741   43102 command_runner.go:130] > #
	I0425 19:27:01.446750   43102 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0425 19:27:01.446764   43102 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0425 19:27:01.446775   43102 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0425 19:27:01.446783   43102 command_runner.go:130] > #
	I0425 19:27:01.446792   43102 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0425 19:27:01.446805   43102 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0425 19:27:01.446818   43102 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0425 19:27:01.446828   43102 command_runner.go:130] > #      only add mounts it finds in this file.
	I0425 19:27:01.446834   43102 command_runner.go:130] > #
	I0425 19:27:01.446841   43102 command_runner.go:130] > # default_mounts_file = ""
	I0425 19:27:01.446852   43102 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0425 19:27:01.446870   43102 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0425 19:27:01.446879   43102 command_runner.go:130] > pids_limit = 1024
	I0425 19:27:01.446889   43102 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0425 19:27:01.446903   43102 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0425 19:27:01.446916   43102 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0425 19:27:01.446931   43102 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0425 19:27:01.446941   43102 command_runner.go:130] > # log_size_max = -1
	I0425 19:27:01.446953   43102 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0425 19:27:01.446964   43102 command_runner.go:130] > # log_to_journald = false
	I0425 19:27:01.446977   43102 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0425 19:27:01.446985   43102 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0425 19:27:01.446997   43102 command_runner.go:130] > # Path to directory for container attach sockets.
	I0425 19:27:01.447005   43102 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0425 19:27:01.447018   43102 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0425 19:27:01.447028   43102 command_runner.go:130] > # bind_mount_prefix = ""
	I0425 19:27:01.447037   43102 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0425 19:27:01.447047   43102 command_runner.go:130] > # read_only = false
	I0425 19:27:01.447056   43102 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0425 19:27:01.447069   43102 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0425 19:27:01.447078   43102 command_runner.go:130] > # live configuration reload.
	I0425 19:27:01.447087   43102 command_runner.go:130] > # log_level = "info"
	I0425 19:27:01.447097   43102 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0425 19:27:01.447108   43102 command_runner.go:130] > # This option supports live configuration reload.
	I0425 19:27:01.447116   43102 command_runner.go:130] > # log_filter = ""
	I0425 19:27:01.447128   43102 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0425 19:27:01.447140   43102 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0425 19:27:01.447150   43102 command_runner.go:130] > # separated by comma.
	I0425 19:27:01.447162   43102 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0425 19:27:01.447171   43102 command_runner.go:130] > # uid_mappings = ""
	I0425 19:27:01.447181   43102 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0425 19:27:01.447196   43102 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0425 19:27:01.447205   43102 command_runner.go:130] > # separated by comma.
	I0425 19:27:01.447216   43102 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0425 19:27:01.447226   43102 command_runner.go:130] > # gid_mappings = ""
	I0425 19:27:01.447236   43102 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0425 19:27:01.447249   43102 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0425 19:27:01.447265   43102 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0425 19:27:01.447280   43102 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0425 19:27:01.447290   43102 command_runner.go:130] > # minimum_mappable_uid = -1
	I0425 19:27:01.447301   43102 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0425 19:27:01.447319   43102 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0425 19:27:01.447331   43102 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0425 19:27:01.447345   43102 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0425 19:27:01.447351   43102 command_runner.go:130] > # minimum_mappable_gid = -1
	I0425 19:27:01.447364   43102 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0425 19:27:01.447376   43102 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0425 19:27:01.447388   43102 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0425 19:27:01.447398   43102 command_runner.go:130] > # ctr_stop_timeout = 30
	I0425 19:27:01.447406   43102 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0425 19:27:01.447420   43102 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0425 19:27:01.447431   43102 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0425 19:27:01.447441   43102 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0425 19:27:01.447452   43102 command_runner.go:130] > drop_infra_ctr = false
	I0425 19:27:01.447465   43102 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0425 19:27:01.447479   43102 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0425 19:27:01.447496   43102 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0425 19:27:01.447506   43102 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0425 19:27:01.447516   43102 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0425 19:27:01.447530   43102 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0425 19:27:01.447542   43102 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0425 19:27:01.447554   43102 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0425 19:27:01.447560   43102 command_runner.go:130] > # shared_cpuset = ""
	I0425 19:27:01.447575   43102 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0425 19:27:01.447586   43102 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0425 19:27:01.447596   43102 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0425 19:27:01.447606   43102 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0425 19:27:01.447616   43102 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0425 19:27:01.447626   43102 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0425 19:27:01.447638   43102 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0425 19:27:01.447648   43102 command_runner.go:130] > # enable_criu_support = false
	I0425 19:27:01.447656   43102 command_runner.go:130] > # Enable/disable the generation of the container,
	I0425 19:27:01.447673   43102 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0425 19:27:01.447682   43102 command_runner.go:130] > # enable_pod_events = false
	I0425 19:27:01.447693   43102 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0425 19:27:01.447719   43102 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0425 19:27:01.447728   43102 command_runner.go:130] > # default_runtime = "runc"
	I0425 19:27:01.447736   43102 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0425 19:27:01.447750   43102 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0425 19:27:01.447763   43102 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0425 19:27:01.447774   43102 command_runner.go:130] > # creation as a file is not desired either.
	I0425 19:27:01.447786   43102 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0425 19:27:01.447797   43102 command_runner.go:130] > # the hostname is being managed dynamically.
	I0425 19:27:01.447806   43102 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0425 19:27:01.447815   43102 command_runner.go:130] > # ]
	I0425 19:27:01.447826   43102 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0425 19:27:01.447840   43102 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0425 19:27:01.447851   43102 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0425 19:27:01.447863   43102 command_runner.go:130] > # Each entry in the table should follow the format:
	I0425 19:27:01.447870   43102 command_runner.go:130] > #
	I0425 19:27:01.447877   43102 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0425 19:27:01.447885   43102 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0425 19:27:01.447907   43102 command_runner.go:130] > # runtime_type = "oci"
	I0425 19:27:01.447916   43102 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0425 19:27:01.447924   43102 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0425 19:27:01.447933   43102 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0425 19:27:01.447940   43102 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0425 19:27:01.447949   43102 command_runner.go:130] > # monitor_env = []
	I0425 19:27:01.447956   43102 command_runner.go:130] > # privileged_without_host_devices = false
	I0425 19:27:01.447967   43102 command_runner.go:130] > # allowed_annotations = []
	I0425 19:27:01.447979   43102 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0425 19:27:01.447988   43102 command_runner.go:130] > # Where:
	I0425 19:27:01.447996   43102 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0425 19:27:01.448009   43102 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0425 19:27:01.448018   43102 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0425 19:27:01.448031   43102 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0425 19:27:01.448039   43102 command_runner.go:130] > #   in $PATH.
	I0425 19:27:01.448049   43102 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0425 19:27:01.448059   43102 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0425 19:27:01.448077   43102 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0425 19:27:01.448085   43102 command_runner.go:130] > #   state.
	I0425 19:27:01.448097   43102 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0425 19:27:01.448110   43102 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0425 19:27:01.448121   43102 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0425 19:27:01.448132   43102 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0425 19:27:01.448146   43102 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0425 19:27:01.448160   43102 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0425 19:27:01.448170   43102 command_runner.go:130] > #   The currently recognized values are:
	I0425 19:27:01.448182   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0425 19:27:01.448197   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0425 19:27:01.448208   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0425 19:27:01.448219   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0425 19:27:01.448233   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0425 19:27:01.448245   43102 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0425 19:27:01.448258   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0425 19:27:01.448270   43102 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0425 19:27:01.448282   43102 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0425 19:27:01.448294   43102 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0425 19:27:01.448301   43102 command_runner.go:130] > #   deprecated option "conmon".
	I0425 19:27:01.448321   43102 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0425 19:27:01.448333   43102 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0425 19:27:01.448348   43102 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0425 19:27:01.448359   43102 command_runner.go:130] > #   should be moved to the container's cgroup
	I0425 19:27:01.448372   43102 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0425 19:27:01.448386   43102 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0425 19:27:01.448401   43102 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0425 19:27:01.448415   43102 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0425 19:27:01.448423   43102 command_runner.go:130] > #
	I0425 19:27:01.448429   43102 command_runner.go:130] > # Using the seccomp notifier feature:
	I0425 19:27:01.448437   43102 command_runner.go:130] > #
	I0425 19:27:01.448446   43102 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0425 19:27:01.448459   43102 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0425 19:27:01.448467   43102 command_runner.go:130] > #
	I0425 19:27:01.448480   43102 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0425 19:27:01.448491   43102 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0425 19:27:01.448501   43102 command_runner.go:130] > #
	I0425 19:27:01.448512   43102 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0425 19:27:01.448523   43102 command_runner.go:130] > # feature.
	I0425 19:27:01.448532   43102 command_runner.go:130] > #
	I0425 19:27:01.448541   43102 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0425 19:27:01.448554   43102 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0425 19:27:01.448566   43102 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0425 19:27:01.448579   43102 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0425 19:27:01.448590   43102 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0425 19:27:01.448597   43102 command_runner.go:130] > #
	I0425 19:27:01.448606   43102 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0425 19:27:01.448619   43102 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0425 19:27:01.448627   43102 command_runner.go:130] > #
	I0425 19:27:01.448637   43102 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0425 19:27:01.448647   43102 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0425 19:27:01.448653   43102 command_runner.go:130] > #
	I0425 19:27:01.448658   43102 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0425 19:27:01.448667   43102 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0425 19:27:01.448671   43102 command_runner.go:130] > # limitation.
	I0425 19:27:01.448677   43102 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0425 19:27:01.448682   43102 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0425 19:27:01.448686   43102 command_runner.go:130] > runtime_type = "oci"
	I0425 19:27:01.448690   43102 command_runner.go:130] > runtime_root = "/run/runc"
	I0425 19:27:01.448696   43102 command_runner.go:130] > runtime_config_path = ""
	I0425 19:27:01.448700   43102 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0425 19:27:01.448706   43102 command_runner.go:130] > monitor_cgroup = "pod"
	I0425 19:27:01.448710   43102 command_runner.go:130] > monitor_exec_cgroup = ""
	I0425 19:27:01.448714   43102 command_runner.go:130] > monitor_env = [
	I0425 19:27:01.448726   43102 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0425 19:27:01.448733   43102 command_runner.go:130] > ]
	I0425 19:27:01.448744   43102 command_runner.go:130] > privileged_without_host_devices = false
	I0425 19:27:01.448757   43102 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0425 19:27:01.448768   43102 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0425 19:27:01.448778   43102 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0425 19:27:01.448793   43102 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0425 19:27:01.448808   43102 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0425 19:27:01.448820   43102 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0425 19:27:01.448845   43102 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0425 19:27:01.448861   43102 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0425 19:27:01.448876   43102 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0425 19:27:01.448890   43102 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0425 19:27:01.448899   43102 command_runner.go:130] > # Example:
	I0425 19:27:01.448907   43102 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0425 19:27:01.448917   43102 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0425 19:27:01.448923   43102 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0425 19:27:01.448929   43102 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0425 19:27:01.448932   43102 command_runner.go:130] > # cpuset = 0
	I0425 19:27:01.448937   43102 command_runner.go:130] > # cpushares = "0-1"
	I0425 19:27:01.448941   43102 command_runner.go:130] > # Where:
	I0425 19:27:01.448945   43102 command_runner.go:130] > # The workload name is workload-type.
	I0425 19:27:01.448954   43102 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0425 19:27:01.448961   43102 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0425 19:27:01.448966   43102 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0425 19:27:01.448976   43102 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0425 19:27:01.448982   43102 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0425 19:27:01.448989   43102 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0425 19:27:01.448995   43102 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0425 19:27:01.449002   43102 command_runner.go:130] > # Default value is set to true
	I0425 19:27:01.449007   43102 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0425 19:27:01.449014   43102 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0425 19:27:01.449019   43102 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0425 19:27:01.449023   43102 command_runner.go:130] > # Default value is set to 'false'
	I0425 19:27:01.449029   43102 command_runner.go:130] > # disable_hostport_mapping = false
	I0425 19:27:01.449036   43102 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0425 19:27:01.449042   43102 command_runner.go:130] > #
	I0425 19:27:01.449047   43102 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0425 19:27:01.449055   43102 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0425 19:27:01.449061   43102 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0425 19:27:01.449067   43102 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0425 19:27:01.449072   43102 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0425 19:27:01.449075   43102 command_runner.go:130] > [crio.image]
	I0425 19:27:01.449081   43102 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0425 19:27:01.449085   43102 command_runner.go:130] > # default_transport = "docker://"
	I0425 19:27:01.449093   43102 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0425 19:27:01.449099   43102 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0425 19:27:01.449103   43102 command_runner.go:130] > # global_auth_file = ""
	I0425 19:27:01.449108   43102 command_runner.go:130] > # The image used to instantiate infra containers.
	I0425 19:27:01.449112   43102 command_runner.go:130] > # This option supports live configuration reload.
	I0425 19:27:01.449117   43102 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0425 19:27:01.449123   43102 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0425 19:27:01.449128   43102 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0425 19:27:01.449133   43102 command_runner.go:130] > # This option supports live configuration reload.
	I0425 19:27:01.449137   43102 command_runner.go:130] > # pause_image_auth_file = ""
	I0425 19:27:01.449142   43102 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0425 19:27:01.449150   43102 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0425 19:27:01.449156   43102 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0425 19:27:01.449163   43102 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0425 19:27:01.449167   43102 command_runner.go:130] > # pause_command = "/pause"
	I0425 19:27:01.449176   43102 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0425 19:27:01.449182   43102 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0425 19:27:01.449187   43102 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0425 19:27:01.449193   43102 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0425 19:27:01.449201   43102 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0425 19:27:01.449207   43102 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0425 19:27:01.449213   43102 command_runner.go:130] > # pinned_images = [
	I0425 19:27:01.449217   43102 command_runner.go:130] > # ]
	I0425 19:27:01.449223   43102 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0425 19:27:01.449233   43102 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0425 19:27:01.449240   43102 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0425 19:27:01.449247   43102 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0425 19:27:01.449252   43102 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0425 19:27:01.449258   43102 command_runner.go:130] > # signature_policy = ""
	I0425 19:27:01.449264   43102 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0425 19:27:01.449274   43102 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0425 19:27:01.449279   43102 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0425 19:27:01.449288   43102 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0425 19:27:01.449293   43102 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0425 19:27:01.449297   43102 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0425 19:27:01.449305   43102 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0425 19:27:01.449318   43102 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0425 19:27:01.449323   43102 command_runner.go:130] > # changing them here.
	I0425 19:27:01.449327   43102 command_runner.go:130] > # insecure_registries = [
	I0425 19:27:01.449330   43102 command_runner.go:130] > # ]
	I0425 19:27:01.449336   43102 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0425 19:27:01.449342   43102 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0425 19:27:01.449346   43102 command_runner.go:130] > # image_volumes = "mkdir"
	I0425 19:27:01.449350   43102 command_runner.go:130] > # Temporary directory to use for storing big files
	I0425 19:27:01.449355   43102 command_runner.go:130] > # big_files_temporary_dir = ""
	I0425 19:27:01.449362   43102 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0425 19:27:01.449366   43102 command_runner.go:130] > # CNI plugins.
	I0425 19:27:01.449370   43102 command_runner.go:130] > [crio.network]
	I0425 19:27:01.449376   43102 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0425 19:27:01.449381   43102 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0425 19:27:01.449385   43102 command_runner.go:130] > # cni_default_network = ""
	I0425 19:27:01.449391   43102 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0425 19:27:01.449397   43102 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0425 19:27:01.449403   43102 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0425 19:27:01.449407   43102 command_runner.go:130] > # plugin_dirs = [
	I0425 19:27:01.449413   43102 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0425 19:27:01.449416   43102 command_runner.go:130] > # ]
	I0425 19:27:01.449422   43102 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0425 19:27:01.449427   43102 command_runner.go:130] > [crio.metrics]
	I0425 19:27:01.449432   43102 command_runner.go:130] > # Globally enable or disable metrics support.
	I0425 19:27:01.449435   43102 command_runner.go:130] > enable_metrics = true
	I0425 19:27:01.449440   43102 command_runner.go:130] > # Specify enabled metrics collectors.
	I0425 19:27:01.449447   43102 command_runner.go:130] > # Per default all metrics are enabled.
	I0425 19:27:01.449452   43102 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0425 19:27:01.449460   43102 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0425 19:27:01.449465   43102 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0425 19:27:01.449471   43102 command_runner.go:130] > # metrics_collectors = [
	I0425 19:27:01.449475   43102 command_runner.go:130] > # 	"operations",
	I0425 19:27:01.449480   43102 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0425 19:27:01.449487   43102 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0425 19:27:01.449491   43102 command_runner.go:130] > # 	"operations_errors",
	I0425 19:27:01.449495   43102 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0425 19:27:01.449499   43102 command_runner.go:130] > # 	"image_pulls_by_name",
	I0425 19:27:01.449504   43102 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0425 19:27:01.449508   43102 command_runner.go:130] > # 	"image_pulls_failures",
	I0425 19:27:01.449512   43102 command_runner.go:130] > # 	"image_pulls_successes",
	I0425 19:27:01.449519   43102 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0425 19:27:01.449522   43102 command_runner.go:130] > # 	"image_layer_reuse",
	I0425 19:27:01.449527   43102 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0425 19:27:01.449536   43102 command_runner.go:130] > # 	"containers_oom_total",
	I0425 19:27:01.449540   43102 command_runner.go:130] > # 	"containers_oom",
	I0425 19:27:01.449546   43102 command_runner.go:130] > # 	"processes_defunct",
	I0425 19:27:01.449550   43102 command_runner.go:130] > # 	"operations_total",
	I0425 19:27:01.449554   43102 command_runner.go:130] > # 	"operations_latency_seconds",
	I0425 19:27:01.449560   43102 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0425 19:27:01.449564   43102 command_runner.go:130] > # 	"operations_errors_total",
	I0425 19:27:01.449570   43102 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0425 19:27:01.449574   43102 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0425 19:27:01.449579   43102 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0425 19:27:01.449583   43102 command_runner.go:130] > # 	"image_pulls_success_total",
	I0425 19:27:01.449587   43102 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0425 19:27:01.449594   43102 command_runner.go:130] > # 	"containers_oom_count_total",
	I0425 19:27:01.449599   43102 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0425 19:27:01.449605   43102 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0425 19:27:01.449608   43102 command_runner.go:130] > # ]
	I0425 19:27:01.449615   43102 command_runner.go:130] > # The port on which the metrics server will listen.
	I0425 19:27:01.449618   43102 command_runner.go:130] > # metrics_port = 9090
	I0425 19:27:01.449623   43102 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0425 19:27:01.449629   43102 command_runner.go:130] > # metrics_socket = ""
	I0425 19:27:01.449634   43102 command_runner.go:130] > # The certificate for the secure metrics server.
	I0425 19:27:01.449642   43102 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0425 19:27:01.449650   43102 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0425 19:27:01.449657   43102 command_runner.go:130] > # certificate on any modification event.
	I0425 19:27:01.449660   43102 command_runner.go:130] > # metrics_cert = ""
	I0425 19:27:01.449671   43102 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0425 19:27:01.449677   43102 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0425 19:27:01.449681   43102 command_runner.go:130] > # metrics_key = ""
	I0425 19:27:01.449687   43102 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0425 19:27:01.449691   43102 command_runner.go:130] > [crio.tracing]
	I0425 19:27:01.449697   43102 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0425 19:27:01.449703   43102 command_runner.go:130] > # enable_tracing = false
	I0425 19:27:01.449708   43102 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0425 19:27:01.449713   43102 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0425 19:27:01.449722   43102 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0425 19:27:01.449731   43102 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0425 19:27:01.449738   43102 command_runner.go:130] > # CRI-O NRI configuration.
	I0425 19:27:01.449746   43102 command_runner.go:130] > [crio.nri]
	I0425 19:27:01.449753   43102 command_runner.go:130] > # Globally enable or disable NRI.
	I0425 19:27:01.449761   43102 command_runner.go:130] > # enable_nri = false
	I0425 19:27:01.449768   43102 command_runner.go:130] > # NRI socket to listen on.
	I0425 19:27:01.449778   43102 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0425 19:27:01.449785   43102 command_runner.go:130] > # NRI plugin directory to use.
	I0425 19:27:01.449795   43102 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0425 19:27:01.449811   43102 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0425 19:27:01.449821   43102 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0425 19:27:01.449829   43102 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0425 19:27:01.449834   43102 command_runner.go:130] > # nri_disable_connections = false
	I0425 19:27:01.449839   43102 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0425 19:27:01.449846   43102 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0425 19:27:01.449852   43102 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0425 19:27:01.449858   43102 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0425 19:27:01.449864   43102 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0425 19:27:01.449870   43102 command_runner.go:130] > [crio.stats]
	I0425 19:27:01.449876   43102 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0425 19:27:01.449883   43102 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0425 19:27:01.449887   43102 command_runner.go:130] > # stats_collection_period = 0
	I0425 19:27:01.450485   43102 command_runner.go:130] ! time="2024-04-25 19:27:01.415007525Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0425 19:27:01.450507   43102 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
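	The [crio.runtime.runtimes.runc] block dumped above is the runtime handler name that Kubernetes selects through a RuntimeClass object. As a minimal sketch (the RuntimeClass object name is illustrative and is not created by this test run), a manifest that routes pods to that handler could look like:

	apiVersion: node.k8s.io/v1
	kind: RuntimeClass
	metadata:
	  name: runc        # illustrative object name, not part of this run
	handler: runc       # must match a [crio.runtime.runtimes.<name>] entry in crio.conf

	A pod would then opt in with spec.runtimeClassName: runc; pods that set no runtimeClassName fall back to default_runtime, as described in the config comments above.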
	I0425 19:27:01.450721   43102 cni.go:84] Creating CNI manager for ""
	I0425 19:27:01.450740   43102 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0425 19:27:01.450750   43102 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 19:27:01.450777   43102 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.194 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-857482 NodeName:multinode-857482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 19:27:01.450932   43102 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-857482"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
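	
	The KubeletConfiguration generated above intentionally disables disk-based eviction for the test VM: imageGCHighThresholdPercent is raised to 100 and every evictionHard threshold is set to 0%. For comparison only, a sketch of the stock kubelet eviction defaults that these settings override (values taken from the upstream kubelet documentation, not from this run):

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	evictionHard:
	  memory.available: "100Mi"   # upstream defaults, shown for contrast
	  nodefs.available: "10%"
	  nodefs.inodesFree: "5%"
	  imagefs.available: "15%"

	Leaving these at their defaults on a small minikube disk could flip the node into DiskPressure mid-test, which is presumably why the generated config zeroes them out.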
	
	I0425 19:27:01.450997   43102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 19:27:01.462219   43102 command_runner.go:130] > kubeadm
	I0425 19:27:01.462246   43102 command_runner.go:130] > kubectl
	I0425 19:27:01.462252   43102 command_runner.go:130] > kubelet
	I0425 19:27:01.462277   43102 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 19:27:01.462329   43102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 19:27:01.472827   43102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0425 19:27:01.492542   43102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 19:27:01.511855   43102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0425 19:27:01.531332   43102 ssh_runner.go:195] Run: grep 192.168.39.194	control-plane.minikube.internal$ /etc/hosts
	I0425 19:27:01.535938   43102 command_runner.go:130] > 192.168.39.194	control-plane.minikube.internal
	I0425 19:27:01.536002   43102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:27:01.684746   43102 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 19:27:01.701274   43102 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482 for IP: 192.168.39.194
	I0425 19:27:01.701301   43102 certs.go:194] generating shared ca certs ...
	I0425 19:27:01.701328   43102 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:27:01.701508   43102 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 19:27:01.701551   43102 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 19:27:01.701561   43102 certs.go:256] generating profile certs ...
	I0425 19:27:01.701630   43102 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/client.key
	I0425 19:27:01.701687   43102 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/apiserver.key.8dbc5944
	I0425 19:27:01.701719   43102 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/proxy-client.key
	I0425 19:27:01.701729   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 19:27:01.701767   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 19:27:01.701787   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 19:27:01.701808   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 19:27:01.701828   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 19:27:01.701846   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 19:27:01.701866   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 19:27:01.701879   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 19:27:01.701929   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 19:27:01.701964   43102 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 19:27:01.701974   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 19:27:01.701997   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 19:27:01.702019   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 19:27:01.702039   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 19:27:01.702074   43102 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:27:01.702098   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 19:27:01.702111   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 19:27:01.702123   43102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:27:01.702668   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 19:27:01.729793   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 19:27:01.755044   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 19:27:01.782051   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 19:27:01.808248   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 19:27:01.833204   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 19:27:01.858215   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 19:27:01.883732   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/multinode-857482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 19:27:01.908798   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 19:27:01.934362   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 19:27:01.961204   43102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 19:27:01.988221   43102 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 19:27:02.006712   43102 ssh_runner.go:195] Run: openssl version
	I0425 19:27:02.013131   43102 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0425 19:27:02.013201   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 19:27:02.026190   43102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 19:27:02.031165   43102 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 19:27:02.031229   43102 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 19:27:02.031268   43102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 19:27:02.037172   43102 command_runner.go:130] > 51391683
	I0425 19:27:02.037227   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 19:27:02.047791   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 19:27:02.059885   43102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 19:27:02.064580   43102 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:27:02.064633   43102 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:27:02.064674   43102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 19:27:02.070670   43102 command_runner.go:130] > 3ec20f2e
	I0425 19:27:02.070718   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 19:27:02.081337   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 19:27:02.093634   43102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:27:02.098346   43102 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:27:02.098440   43102 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:27:02.098487   43102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:27:02.104530   43102 command_runner.go:130] > b5213941
	I0425 19:27:02.104589   43102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 19:27:02.115119   43102 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 19:27:02.120054   43102 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 19:27:02.120076   43102 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0425 19:27:02.120093   43102 command_runner.go:130] > Device: 253,1	Inode: 7339542     Links: 1
	I0425 19:27:02.120104   43102 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0425 19:27:02.120116   43102 command_runner.go:130] > Access: 2024-04-25 19:20:31.246400190 +0000
	I0425 19:27:02.120121   43102 command_runner.go:130] > Modify: 2024-04-25 19:20:31.246400190 +0000
	I0425 19:27:02.120127   43102 command_runner.go:130] > Change: 2024-04-25 19:20:31.246400190 +0000
	I0425 19:27:02.120134   43102 command_runner.go:130] >  Birth: 2024-04-25 19:20:31.246400190 +0000
	I0425 19:27:02.120179   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 19:27:02.126261   43102 command_runner.go:130] > Certificate will not expire
	I0425 19:27:02.126314   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 19:27:02.132054   43102 command_runner.go:130] > Certificate will not expire
	I0425 19:27:02.132221   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 19:27:02.137928   43102 command_runner.go:130] > Certificate will not expire
	I0425 19:27:02.138252   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 19:27:02.144161   43102 command_runner.go:130] > Certificate will not expire
	I0425 19:27:02.144192   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 19:27:02.149926   43102 command_runner.go:130] > Certificate will not expire
	I0425 19:27:02.149966   43102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 19:27:02.155689   43102 command_runner.go:130] > Certificate will not expire
	I0425 19:27:02.155774   43102 kubeadm.go:391] StartCluster: {Name:multinode-857482 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-857482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.194 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.172 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.135 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:27:02.155893   43102 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 19:27:02.155937   43102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 19:27:02.200225   43102 command_runner.go:130] > 90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1
	I0425 19:27:02.200250   43102 command_runner.go:130] > 45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4
	I0425 19:27:02.200255   43102 command_runner.go:130] > e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653
	I0425 19:27:02.200262   43102 command_runner.go:130] > ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53
	I0425 19:27:02.200267   43102 command_runner.go:130] > a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd
	I0425 19:27:02.200272   43102 command_runner.go:130] > 50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a
	I0425 19:27:02.200277   43102 command_runner.go:130] > 843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e
	I0425 19:27:02.200284   43102 command_runner.go:130] > 374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95
	I0425 19:27:02.200306   43102 cri.go:89] found id: "90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1"
	I0425 19:27:02.200321   43102 cri.go:89] found id: "45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4"
	I0425 19:27:02.200326   43102 cri.go:89] found id: "e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653"
	I0425 19:27:02.200331   43102 cri.go:89] found id: "ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53"
	I0425 19:27:02.200335   43102 cri.go:89] found id: "a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd"
	I0425 19:27:02.200342   43102 cri.go:89] found id: "50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a"
	I0425 19:27:02.200346   43102 cri.go:89] found id: "843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e"
	I0425 19:27:02.200354   43102 cri.go:89] found id: "374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95"
	I0425 19:27:02.200358   43102 cri.go:89] found id: ""
	I0425 19:27:02.200395   43102 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.014193284Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714073452014166689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3301e440-5023-49c4-8d59-c3b4ae912eef name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.014970452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17411afe-f09c-4906-ab39-71f63c4f1b49 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.015061077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17411afe-f09c-4906-ab39-71f63c4f1b49 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.015399176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36b5ce47353cc0c96dc8b5e9a33afd7fb38b5fbeabb96502d852b56825a6cb3d,PodSandboxId:dd50b400fdc3b7ad73cd4c60d7cd079e8fd50dea262c8891ed8fd5c3f1024876,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714073263074706219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cf38fb5a61ccd685210f49586b1838e03e6a8f24e5dd7f90b212b82b98e2f7,PodSandboxId:e6803b765d61e1ba7f49b06b2427aff8568e5224b90549e5b0cb6cadf7a8db44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714073229612261901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d831b8602d86212ca0ef86f76b2a589993ec24594da3f7e115dd7e98fe29fb7,PodSandboxId:48c24d8fe93ade998923e73bcfadaa746d58973624039eb5a4f47fb4c33dbcab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714073229471756466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e36a3005313fbfb55de3fd1442a04ed10cb79094f1a654508afe8d0485ba41,PodSandboxId:0801a5c3e1bad99484d6fc95a29ce72432691a446dbfa2192853f084911be965,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714073229429973720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},An
notations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0f0bec0bfe8c519abee73f140e20eaa46e228490f690075a8a9ea1d0832b2e,PodSandboxId:92d31c8b03469b443faa23fa66606c74548c2f21fc9bf70f79d1a9cb7048c9bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714073229337173054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30-4abec6ea5602,},Annotations:map[string]string{io.ku
bernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a74a0d18b4fffdefa91ae8f49e5820ac816df60ff4fa7be1478ec7b6c96adac,PodSandboxId:83ea89064e01f551e77ddd34e4a5bfa50de0b78bf65e908b300f50ad1ee8f212,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714073224543468174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd00ac37f2b0620a23ff4777f7b8c8f3bd22582d2b0680cc199db51c8d2a2ebe,PodSandboxId:558bff1fcd05bf116d3f65453c9b929ce7b50127cdc80b192983ce8c83f3f9d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714073224505108779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io.kub
ernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fc7eb8c6052e494d92460d7cc331abd9063f24fc7baece9b50aa5349942750,PodSandboxId:9b32c84c32b51e23ce062771f8a8029b513f58e6bb4f05d192ae7c1b198888a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714073224499323025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1ec686ad9a7a298de9bc3ed357f9487b2134662a8b2377fc4b4c63d25701f2,PodSandboxId:814a5b64618a0f32875cce5e29b1a6c717454aa0f3051bcdffb2f27aa2f64d42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714073224389365563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.container.hash: c16a365d,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa56fafcb469dc3396bc085681aa0b6058fdca2f63fda74d7ce625ee56d7b228,PodSandboxId:26e22cc5d185c36ad51b61d52b3a92341a5345bb64ca7086bd1e36f3ca3a65a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714072906141442748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1,PodSandboxId:42269422724f8c49b8316f09efb7256089d87cf294a3d91a8fec646997201ec0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714072857917492767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},Annotations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4,PodSandboxId:c02b4009f3b7819c289e15a9b9634da93780d6f19507bcaa63086c422bf3d779,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714072856649372910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653,PodSandboxId:b7b0f9cfcb0ca0e91e6f3fdddcc5294f2c8623b3fdf7e8f469bc6527ac80ba1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714072854916856256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53,PodSandboxId:88c72015e2cbcb088b9378b418e2b4a8271565f15dbd9426bfcb9c699aed8474,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714072854684172654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30
-4abec6ea5602,},Annotations:map[string]string{io.kubernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd,PodSandboxId:5c59a402a7f40ed8e0574c71e5b2687615ed5b1f218712a1a6e052fa14cc6169,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714072834905751142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a,PodSandboxId:22aa80497e4ffc633cd6fc08d1710f481bcb900a5cac34f13ccce495c06874c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714072834870358706,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.
container.hash: c16a365d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e,PodSandboxId:6c5aba426835e831b909ab93f25b0a25b037ff565dc6fe62ea410d1cee46c1ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714072834823453769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95,PodSandboxId:51b56702f2b0bc5e3e0d647c3647512e86f57eb259b19391739deb2056df9d20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714072834771018602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17411afe-f09c-4906-ab39-71f63c4f1b49 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.065577081Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09a1062a-f051-4478-b861-b0d5b6c0e529 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.065782056Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09a1062a-f051-4478-b861-b0d5b6c0e529 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.067298009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f18f4dd5-c8a4-450a-b0fe-942df63cc79b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.067933803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714073452067905610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f18f4dd5-c8a4-450a-b0fe-942df63cc79b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.068514581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d37b5173-aa4e-43b2-b1c2-5cc4f78d5e17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.068598288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d37b5173-aa4e-43b2-b1c2-5cc4f78d5e17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.069053105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36b5ce47353cc0c96dc8b5e9a33afd7fb38b5fbeabb96502d852b56825a6cb3d,PodSandboxId:dd50b400fdc3b7ad73cd4c60d7cd079e8fd50dea262c8891ed8fd5c3f1024876,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714073263074706219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cf38fb5a61ccd685210f49586b1838e03e6a8f24e5dd7f90b212b82b98e2f7,PodSandboxId:e6803b765d61e1ba7f49b06b2427aff8568e5224b90549e5b0cb6cadf7a8db44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714073229612261901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d831b8602d86212ca0ef86f76b2a589993ec24594da3f7e115dd7e98fe29fb7,PodSandboxId:48c24d8fe93ade998923e73bcfadaa746d58973624039eb5a4f47fb4c33dbcab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714073229471756466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e36a3005313fbfb55de3fd1442a04ed10cb79094f1a654508afe8d0485ba41,PodSandboxId:0801a5c3e1bad99484d6fc95a29ce72432691a446dbfa2192853f084911be965,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714073229429973720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},An
notations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0f0bec0bfe8c519abee73f140e20eaa46e228490f690075a8a9ea1d0832b2e,PodSandboxId:92d31c8b03469b443faa23fa66606c74548c2f21fc9bf70f79d1a9cb7048c9bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714073229337173054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30-4abec6ea5602,},Annotations:map[string]string{io.ku
bernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a74a0d18b4fffdefa91ae8f49e5820ac816df60ff4fa7be1478ec7b6c96adac,PodSandboxId:83ea89064e01f551e77ddd34e4a5bfa50de0b78bf65e908b300f50ad1ee8f212,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714073224543468174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd00ac37f2b0620a23ff4777f7b8c8f3bd22582d2b0680cc199db51c8d2a2ebe,PodSandboxId:558bff1fcd05bf116d3f65453c9b929ce7b50127cdc80b192983ce8c83f3f9d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714073224505108779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io.kub
ernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fc7eb8c6052e494d92460d7cc331abd9063f24fc7baece9b50aa5349942750,PodSandboxId:9b32c84c32b51e23ce062771f8a8029b513f58e6bb4f05d192ae7c1b198888a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714073224499323025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1ec686ad9a7a298de9bc3ed357f9487b2134662a8b2377fc4b4c63d25701f2,PodSandboxId:814a5b64618a0f32875cce5e29b1a6c717454aa0f3051bcdffb2f27aa2f64d42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714073224389365563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.container.hash: c16a365d,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa56fafcb469dc3396bc085681aa0b6058fdca2f63fda74d7ce625ee56d7b228,PodSandboxId:26e22cc5d185c36ad51b61d52b3a92341a5345bb64ca7086bd1e36f3ca3a65a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714072906141442748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1,PodSandboxId:42269422724f8c49b8316f09efb7256089d87cf294a3d91a8fec646997201ec0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714072857917492767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},Annotations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4,PodSandboxId:c02b4009f3b7819c289e15a9b9634da93780d6f19507bcaa63086c422bf3d779,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714072856649372910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653,PodSandboxId:b7b0f9cfcb0ca0e91e6f3fdddcc5294f2c8623b3fdf7e8f469bc6527ac80ba1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714072854916856256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53,PodSandboxId:88c72015e2cbcb088b9378b418e2b4a8271565f15dbd9426bfcb9c699aed8474,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714072854684172654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30
-4abec6ea5602,},Annotations:map[string]string{io.kubernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd,PodSandboxId:5c59a402a7f40ed8e0574c71e5b2687615ed5b1f218712a1a6e052fa14cc6169,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714072834905751142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a,PodSandboxId:22aa80497e4ffc633cd6fc08d1710f481bcb900a5cac34f13ccce495c06874c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714072834870358706,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.
container.hash: c16a365d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e,PodSandboxId:6c5aba426835e831b909ab93f25b0a25b037ff565dc6fe62ea410d1cee46c1ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714072834823453769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95,PodSandboxId:51b56702f2b0bc5e3e0d647c3647512e86f57eb259b19391739deb2056df9d20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714072834771018602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d37b5173-aa4e-43b2-b1c2-5cc4f78d5e17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.118219576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a3e6921-9420-42e1-b39e-a04b2d57046a name=/runtime.v1.RuntimeService/Version
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.118316051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a3e6921-9420-42e1-b39e-a04b2d57046a name=/runtime.v1.RuntimeService/Version
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.120018354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c4e80fe-77d9-4b92-a955-872a8775bbb8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.120442933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714073452120418826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c4e80fe-77d9-4b92-a955-872a8775bbb8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.121497086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cb58c78-b463-4635-9c80-af1198b91245 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.121555484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cb58c78-b463-4635-9c80-af1198b91245 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.121977908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36b5ce47353cc0c96dc8b5e9a33afd7fb38b5fbeabb96502d852b56825a6cb3d,PodSandboxId:dd50b400fdc3b7ad73cd4c60d7cd079e8fd50dea262c8891ed8fd5c3f1024876,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714073263074706219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cf38fb5a61ccd685210f49586b1838e03e6a8f24e5dd7f90b212b82b98e2f7,PodSandboxId:e6803b765d61e1ba7f49b06b2427aff8568e5224b90549e5b0cb6cadf7a8db44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714073229612261901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d831b8602d86212ca0ef86f76b2a589993ec24594da3f7e115dd7e98fe29fb7,PodSandboxId:48c24d8fe93ade998923e73bcfadaa746d58973624039eb5a4f47fb4c33dbcab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714073229471756466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e36a3005313fbfb55de3fd1442a04ed10cb79094f1a654508afe8d0485ba41,PodSandboxId:0801a5c3e1bad99484d6fc95a29ce72432691a446dbfa2192853f084911be965,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714073229429973720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},An
notations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0f0bec0bfe8c519abee73f140e20eaa46e228490f690075a8a9ea1d0832b2e,PodSandboxId:92d31c8b03469b443faa23fa66606c74548c2f21fc9bf70f79d1a9cb7048c9bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714073229337173054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30-4abec6ea5602,},Annotations:map[string]string{io.ku
bernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a74a0d18b4fffdefa91ae8f49e5820ac816df60ff4fa7be1478ec7b6c96adac,PodSandboxId:83ea89064e01f551e77ddd34e4a5bfa50de0b78bf65e908b300f50ad1ee8f212,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714073224543468174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd00ac37f2b0620a23ff4777f7b8c8f3bd22582d2b0680cc199db51c8d2a2ebe,PodSandboxId:558bff1fcd05bf116d3f65453c9b929ce7b50127cdc80b192983ce8c83f3f9d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714073224505108779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io.kub
ernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fc7eb8c6052e494d92460d7cc331abd9063f24fc7baece9b50aa5349942750,PodSandboxId:9b32c84c32b51e23ce062771f8a8029b513f58e6bb4f05d192ae7c1b198888a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714073224499323025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1ec686ad9a7a298de9bc3ed357f9487b2134662a8b2377fc4b4c63d25701f2,PodSandboxId:814a5b64618a0f32875cce5e29b1a6c717454aa0f3051bcdffb2f27aa2f64d42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714073224389365563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.container.hash: c16a365d,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa56fafcb469dc3396bc085681aa0b6058fdca2f63fda74d7ce625ee56d7b228,PodSandboxId:26e22cc5d185c36ad51b61d52b3a92341a5345bb64ca7086bd1e36f3ca3a65a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714072906141442748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1,PodSandboxId:42269422724f8c49b8316f09efb7256089d87cf294a3d91a8fec646997201ec0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714072857917492767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},Annotations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4,PodSandboxId:c02b4009f3b7819c289e15a9b9634da93780d6f19507bcaa63086c422bf3d779,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714072856649372910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653,PodSandboxId:b7b0f9cfcb0ca0e91e6f3fdddcc5294f2c8623b3fdf7e8f469bc6527ac80ba1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714072854916856256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53,PodSandboxId:88c72015e2cbcb088b9378b418e2b4a8271565f15dbd9426bfcb9c699aed8474,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714072854684172654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30
-4abec6ea5602,},Annotations:map[string]string{io.kubernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd,PodSandboxId:5c59a402a7f40ed8e0574c71e5b2687615ed5b1f218712a1a6e052fa14cc6169,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714072834905751142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a,PodSandboxId:22aa80497e4ffc633cd6fc08d1710f481bcb900a5cac34f13ccce495c06874c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714072834870358706,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.
container.hash: c16a365d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e,PodSandboxId:6c5aba426835e831b909ab93f25b0a25b037ff565dc6fe62ea410d1cee46c1ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714072834823453769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95,PodSandboxId:51b56702f2b0bc5e3e0d647c3647512e86f57eb259b19391739deb2056df9d20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714072834771018602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cb58c78-b463-4635-9c80-af1198b91245 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.172088078Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f236e403-b279-4a73-ae12-83433f860c64 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.172188589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f236e403-b279-4a73-ae12-83433f860c64 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.173903144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1177df4b-8aa3-4403-af1a-f31137172e91 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.174287726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714073452174268468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1177df4b-8aa3-4403-af1a-f31137172e91 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.175275522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81757205-6b47-40f6-8e96-a3edb0bddc67 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.175329300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81757205-6b47-40f6-8e96-a3edb0bddc67 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:30:52 multinode-857482 crio[2844]: time="2024-04-25 19:30:52.176999850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36b5ce47353cc0c96dc8b5e9a33afd7fb38b5fbeabb96502d852b56825a6cb3d,PodSandboxId:dd50b400fdc3b7ad73cd4c60d7cd079e8fd50dea262c8891ed8fd5c3f1024876,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714073263074706219,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57cf38fb5a61ccd685210f49586b1838e03e6a8f24e5dd7f90b212b82b98e2f7,PodSandboxId:e6803b765d61e1ba7f49b06b2427aff8568e5224b90549e5b0cb6cadf7a8db44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714073229612261901,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d831b8602d86212ca0ef86f76b2a589993ec24594da3f7e115dd7e98fe29fb7,PodSandboxId:48c24d8fe93ade998923e73bcfadaa746d58973624039eb5a4f47fb4c33dbcab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714073229471756466,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53e36a3005313fbfb55de3fd1442a04ed10cb79094f1a654508afe8d0485ba41,PodSandboxId:0801a5c3e1bad99484d6fc95a29ce72432691a446dbfa2192853f084911be965,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714073229429973720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},An
notations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0f0bec0bfe8c519abee73f140e20eaa46e228490f690075a8a9ea1d0832b2e,PodSandboxId:92d31c8b03469b443faa23fa66606c74548c2f21fc9bf70f79d1a9cb7048c9bf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714073229337173054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30-4abec6ea5602,},Annotations:map[string]string{io.ku
bernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a74a0d18b4fffdefa91ae8f49e5820ac816df60ff4fa7be1478ec7b6c96adac,PodSandboxId:83ea89064e01f551e77ddd34e4a5bfa50de0b78bf65e908b300f50ad1ee8f212,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714073224543468174,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd00ac37f2b0620a23ff4777f7b8c8f3bd22582d2b0680cc199db51c8d2a2ebe,PodSandboxId:558bff1fcd05bf116d3f65453c9b929ce7b50127cdc80b192983ce8c83f3f9d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714073224505108779,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io.kub
ernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fc7eb8c6052e494d92460d7cc331abd9063f24fc7baece9b50aa5349942750,PodSandboxId:9b32c84c32b51e23ce062771f8a8029b513f58e6bb4f05d192ae7c1b198888a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714073224499323025,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f56b21d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1ec686ad9a7a298de9bc3ed357f9487b2134662a8b2377fc4b4c63d25701f2,PodSandboxId:814a5b64618a0f32875cce5e29b1a6c717454aa0f3051bcdffb2f27aa2f64d42,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714073224389365563,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.container.hash: c16a365d,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa56fafcb469dc3396bc085681aa0b6058fdca2f63fda74d7ce625ee56d7b228,PodSandboxId:26e22cc5d185c36ad51b61d52b3a92341a5345bb64ca7086bd1e36f3ca3a65a7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714072906141442748,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-5nvcd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfab8c51-36de-44d5-859a-efe4f72047e7,},Annotations:map[string]string{io.kubernetes.container.hash: c7814b46,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f63f2641daef8eb7c5c508a66f96746d4792f790ff78bce5ee8eee6b93c9c1,PodSandboxId:42269422724f8c49b8316f09efb7256089d87cf294a3d91a8fec646997201ec0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714072857917492767,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a,},Annotations:map[string]string{io.kubernetes.container.hash: 3bdd1ea7,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4,PodSandboxId:c02b4009f3b7819c289e15a9b9634da93780d6f19507bcaa63086c422bf3d779,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714072856649372910,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jpgn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2,},Annotations:map[string]string{io.kubernetes.container.hash: a3ddf83f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653,PodSandboxId:b7b0f9cfcb0ca0e91e6f3fdddcc5294f2c8623b3fdf7e8f469bc6527ac80ba1c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714072854916856256,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-cslck,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6dda0d17-6ae1-40ac-9ed3-a272478b00e9,},Annotations:map[string]string{io.kubernetes.container.hash: 24a49d11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53,PodSandboxId:88c72015e2cbcb088b9378b418e2b4a8271565f15dbd9426bfcb9c699aed8474,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714072854684172654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r749w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88201317-c03e-4b73-9d30
-4abec6ea5602,},Annotations:map[string]string{io.kubernetes.container.hash: d318a5f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd,PodSandboxId:5c59a402a7f40ed8e0574c71e5b2687615ed5b1f218712a1a6e052fa14cc6169,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714072834905751142,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffb6f6ee3c3897f3a53515ac1d9fcd4f,}
,Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a,PodSandboxId:22aa80497e4ffc633cd6fc08d1710f481bcb900a5cac34f13ccce495c06874c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714072834870358706,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f041ad0bca6f7612bd8e32af5f02f27,},Annotations:map[string]string{io.kubernetes.
container.hash: c16a365d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e,PodSandboxId:6c5aba426835e831b909ab93f25b0a25b037ff565dc6fe62ea410d1cee46c1ff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714072834823453769,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bbe002fc4c8d624d17d33b50acbf921,},Annotations:map[string]string{io
.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95,PodSandboxId:51b56702f2b0bc5e3e0d647c3647512e86f57eb259b19391739deb2056df9d20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714072834771018602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-857482,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b80bf0b1584ee9efba0fe13cdfb8382,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: f56b21d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81757205-6b47-40f6-8e96-a3edb0bddc67 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	36b5ce47353cc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   dd50b400fdc3b       busybox-fc5497c4f-5nvcd
	57cf38fb5a61c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   e6803b765d61e       kindnet-cslck
	0d831b8602d86       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   48c24d8fe93ad       coredns-7db6d8ff4d-jpgn9
	53e36a3005313       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   0801a5c3e1bad       storage-provisioner
	7e0f0bec0bfe8       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago       Running             kube-proxy                1                   92d31c8b03469       kube-proxy-r749w
	8a74a0d18b4ff       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago       Running             kube-scheduler            1                   83ea89064e01f       kube-scheduler-multinode-857482
	dd00ac37f2b06       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago       Running             kube-controller-manager   1                   558bff1fcd05b       kube-controller-manager-multinode-857482
	b4fc7eb8c6052       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago       Running             kube-apiserver            1                   9b32c84c32b51       kube-apiserver-multinode-857482
	6b1ec686ad9a7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   814a5b64618a0       etcd-multinode-857482
	aa56fafcb469d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   26e22cc5d185c       busybox-fc5497c4f-5nvcd
	90f63f2641dae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   42269422724f8       storage-provisioner
	45abb60926ed3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   c02b4009f3b78       coredns-7db6d8ff4d-jpgn9
	e5e85ab7416e7       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   b7b0f9cfcb0ca       kindnet-cslck
	ef8755e344e04       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      9 minutes ago       Exited              kube-proxy                0                   88c72015e2cbc       kube-proxy-r749w
	a2e02984ebc2f       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      10 minutes ago      Exited              kube-scheduler            0                   5c59a402a7f40       kube-scheduler-multinode-857482
	50d52d4bddff3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   22aa80497e4ff       etcd-multinode-857482
	843f769af6424       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      10 minutes ago      Exited              kube-controller-manager   0                   6c5aba426835e       kube-controller-manager-multinode-857482
	374c5041b0427       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      10 minutes ago      Exited              kube-apiserver            0                   51b56702f2b0b       kube-apiserver-multinode-857482
	
	
	==> coredns [0d831b8602d86212ca0ef86f76b2a589993ec24594da3f7e115dd7e98fe29fb7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47317 - 48826 "HINFO IN 8766005205731033561.6680086199325924933. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021028555s
	
	
	==> coredns [45abb60926ed354937b786a3c838ba02b9cbe8e46439856f094b2ea2f098b5e4] <==
	[INFO] 10.244.1.2:41791 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002394988s
	[INFO] 10.244.1.2:44861 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125514s
	[INFO] 10.244.1.2:38916 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100885s
	[INFO] 10.244.1.2:59067 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001196938s
	[INFO] 10.244.1.2:44526 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245468s
	[INFO] 10.244.1.2:41212 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078702s
	[INFO] 10.244.1.2:48858 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077356s
	[INFO] 10.244.0.3:41411 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118264s
	[INFO] 10.244.0.3:58901 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079488s
	[INFO] 10.244.0.3:39547 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074073s
	[INFO] 10.244.0.3:33466 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151126s
	[INFO] 10.244.1.2:39324 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201953s
	[INFO] 10.244.1.2:48448 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115632s
	[INFO] 10.244.1.2:42885 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108855s
	[INFO] 10.244.1.2:55393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000101583s
	[INFO] 10.244.0.3:49668 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125129s
	[INFO] 10.244.0.3:58718 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000150779s
	[INFO] 10.244.0.3:59358 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138018s
	[INFO] 10.244.0.3:55736 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000094508s
	[INFO] 10.244.1.2:37993 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000253074s
	[INFO] 10.244.1.2:34336 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000144026s
	[INFO] 10.244.1.2:57786 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138683s
	[INFO] 10.244.1.2:55015 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00015061s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-857482
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-857482
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=multinode-857482
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T19_20_41_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:20:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-857482
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:30:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:27:07 +0000   Thu, 25 Apr 2024 19:20:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:27:07 +0000   Thu, 25 Apr 2024 19:20:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:27:07 +0000   Thu, 25 Apr 2024 19:20:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:27:07 +0000   Thu, 25 Apr 2024 19:20:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.194
	  Hostname:    multinode-857482
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0def079f20434cd5bbdfc4247f5577c0
	  System UUID:                0def079f-2043-4cd5-bbdf-c4247f5577c0
	  Boot ID:                    833f3010-465e-47f2-b2dd-9ef743d0be86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5nvcd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 coredns-7db6d8ff4d-jpgn9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m58s
	  kube-system                 etcd-multinode-857482                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-cslck                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-apiserver-multinode-857482             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-857482    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-r749w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	  kube-system                 kube-scheduler-multinode-857482             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m57s                  kube-proxy       
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-857482 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-857482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-857482 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m59s                  node-controller  Node multinode-857482 event: Registered Node multinode-857482 in Controller
	  Normal  NodeReady                9m56s                  kubelet          Node multinode-857482 status is now: NodeReady
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet          Node multinode-857482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet          Node multinode-857482 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet          Node multinode-857482 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-857482 event: Registered Node multinode-857482 in Controller
	
	
	Name:               multinode-857482-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-857482-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=multinode-857482
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_25T19_27_45_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:27:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-857482-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:28:25 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 25 Apr 2024 19:28:16 +0000   Thu, 25 Apr 2024 19:29:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 25 Apr 2024 19:28:16 +0000   Thu, 25 Apr 2024 19:29:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 25 Apr 2024 19:28:16 +0000   Thu, 25 Apr 2024 19:29:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 25 Apr 2024 19:28:16 +0000   Thu, 25 Apr 2024 19:29:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    multinode-857482-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 add9791c83e34e71b7a9b00dc5ab31c1
	  System UUID:                add9791c-83e3-4e71-b7a9-b00dc5ab31c1
	  Boot ID:                    f79d32d3-488c-483e-8368-611ad5060b99
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-j5v9r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kindnet-hqr9m              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m22s
	  kube-system                 kube-proxy-b9xv5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m2s                   kube-proxy       
	  Normal  Starting                 9m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m22s (x2 over 9m22s)  kubelet          Node multinode-857482-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x2 over 9m22s)  kubelet          Node multinode-857482-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x2 over 9m22s)  kubelet          Node multinode-857482-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m12s                  kubelet          Node multinode-857482-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m8s)    kubelet          Node multinode-857482-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m8s)    kubelet          Node multinode-857482-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m8s)    kubelet          Node multinode-857482-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m58s                  kubelet          Node multinode-857482-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-857482-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.069962] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.196750] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.136392] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.274846] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.761080] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.059696] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.903774] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +1.103525] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.473757] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.089108] kauditd_printk_skb: 25 callbacks suppressed
	[ +14.241934] systemd-fstab-generator[1507]: Ignoring "noauto" option for root device
	[  +0.028551] kauditd_printk_skb: 21 callbacks suppressed
	[Apr25 19:21] kauditd_printk_skb: 84 callbacks suppressed
	[Apr25 19:26] systemd-fstab-generator[2762]: Ignoring "noauto" option for root device
	[  +0.167255] systemd-fstab-generator[2774]: Ignoring "noauto" option for root device
	[  +0.189315] systemd-fstab-generator[2788]: Ignoring "noauto" option for root device
	[  +0.160235] systemd-fstab-generator[2800]: Ignoring "noauto" option for root device
	[  +0.310138] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[Apr25 19:27] systemd-fstab-generator[2928]: Ignoring "noauto" option for root device
	[  +0.084454] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.783396] systemd-fstab-generator[3054]: Ignoring "noauto" option for root device
	[  +5.788981] kauditd_printk_skb: 74 callbacks suppressed
	[  +9.953591] systemd-fstab-generator[3855]: Ignoring "noauto" option for root device
	[  +0.112702] kauditd_printk_skb: 32 callbacks suppressed
	[ +23.697165] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [50d52d4bddff3f1b68c4839cdee8cdbd4d387bd92f2bff4ec906d8bb1b0a0d8a] <==
	{"level":"info","ts":"2024-04-25T19:20:36.062927Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.194:2379"}
	{"level":"info","ts":"2024-04-25T19:20:36.069689Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T19:20:36.073783Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T19:20:36.113745Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:20:36.114109Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:20:36.114239Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-04-25T19:21:30.776938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.167009ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5517348346174703230 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-857482-m02.17c99c365cbc3dac\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-857482-m02.17c99c365cbc3dac\" value_size:642 lease:5517348346174702641 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-25T19:21:30.777296Z","caller":"traceutil/trace.go:171","msg":"trace[1124323218] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"253.232436ms","start":"2024-04-25T19:21:30.52405Z","end":"2024-04-25T19:21:30.777282Z","steps":["trace[1124323218] 'process raft request'  (duration: 69.090166ms)","trace[1124323218] 'compare'  (duration: 183.076233ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-25T19:21:30.777434Z","caller":"traceutil/trace.go:171","msg":"trace[1314848433] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"202.218678ms","start":"2024-04-25T19:21:30.575201Z","end":"2024-04-25T19:21:30.77742Z","steps":["trace[1314848433] 'process raft request'  (duration: 201.832445ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T19:22:19.211306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.410465ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5517348346174703623 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-857482-m03.17c99c41a4841b5a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-857482-m03.17c99c41a4841b5a\" value_size:640 lease:5517348346174703348 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-25T19:22:19.211689Z","caller":"traceutil/trace.go:171","msg":"trace[1989869227] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"230.250708ms","start":"2024-04-25T19:22:18.981339Z","end":"2024-04-25T19:22:19.21159Z","steps":["trace[1989869227] 'process raft request'  (duration: 64.395462ms)","trace[1989869227] 'compare'  (duration: 165.114085ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-25T19:22:19.211914Z","caller":"traceutil/trace.go:171","msg":"trace[1026513346] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"180.961245ms","start":"2024-04-25T19:22:19.030944Z","end":"2024-04-25T19:22:19.211905Z","steps":["trace[1026513346] 'process raft request'  (duration: 180.552677ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T19:22:23.35508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.022103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-25T19:22:23.355203Z","caller":"traceutil/trace.go:171","msg":"trace[2040449417] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:617; }","duration":"138.182198ms","start":"2024-04-25T19:22:23.217006Z","end":"2024-04-25T19:22:23.355188Z","steps":["trace[2040449417] 'count revisions from in-memory index tree'  (duration: 137.972707ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T19:22:23.355255Z","caller":"traceutil/trace.go:171","msg":"trace[911213496] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"119.396753ms","start":"2024-04-25T19:22:23.235846Z","end":"2024-04-25T19:22:23.355242Z","steps":["trace[911213496] 'process raft request'  (duration: 119.071977ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T19:25:19.352824Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-25T19:25:19.353001Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-857482","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"]}
	{"level":"warn","ts":"2024-04-25T19:25:19.353124Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.194:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-25T19:25:19.353158Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.194:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-25T19:25:19.353266Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-25T19:25:19.353324Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-25T19:25:19.424008Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b4bd7d4638784c91","current-leader-member-id":"b4bd7d4638784c91"}
	{"level":"info","ts":"2024-04-25T19:25:19.427098Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-25T19:25:19.4274Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-25T19:25:19.427453Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-857482","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"]}
	
	
	==> etcd [6b1ec686ad9a7a298de9bc3ed357f9487b2134662a8b2377fc4b4c63d25701f2] <==
	{"level":"info","ts":"2024-04-25T19:27:04.881363Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-25T19:27:04.886731Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-25T19:27:04.89298Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-25T19:27:04.893202Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b4bd7d4638784c91","initial-advertise-peer-urls":["https://192.168.39.194:2380"],"listen-peer-urls":["https://192.168.39.194:2380"],"advertise-client-urls":["https://192.168.39.194:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.194:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-25T19:27:04.893258Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-25T19:27:04.899758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 switched to configuration voters=(13023703437973933201)"}
	{"level":"info","ts":"2024-04-25T19:27:04.900122Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","added-peer-id":"b4bd7d4638784c91","added-peer-peer-urls":["https://192.168.39.194:2380"]}
	{"level":"info","ts":"2024-04-25T19:27:04.903277Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb2ce3d66f8fb721","local-member-id":"b4bd7d4638784c91","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:27:04.903512Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:27:04.900748Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-25T19:27:04.905799Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.194:2380"}
	{"level":"info","ts":"2024-04-25T19:27:06.353356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-25T19:27:06.353431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-25T19:27:06.353484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgPreVoteResp from b4bd7d4638784c91 at term 2"}
	{"level":"info","ts":"2024-04-25T19:27:06.353499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became candidate at term 3"}
	{"level":"info","ts":"2024-04-25T19:27:06.353505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 received MsgVoteResp from b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2024-04-25T19:27:06.353512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b4bd7d4638784c91 became leader at term 3"}
	{"level":"info","ts":"2024-04-25T19:27:06.353523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b4bd7d4638784c91 elected leader b4bd7d4638784c91 at term 3"}
	{"level":"info","ts":"2024-04-25T19:27:06.360181Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b4bd7d4638784c91","local-member-attributes":"{Name:multinode-857482 ClientURLs:[https://192.168.39.194:2379]}","request-path":"/0/members/b4bd7d4638784c91/attributes","cluster-id":"bb2ce3d66f8fb721","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T19:27:06.360397Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T19:27:06.36044Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T19:27:06.360389Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:27:06.360411Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:27:06.36259Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.194:2379"}
	{"level":"info","ts":"2024-04-25T19:27:06.363499Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:30:52 up 10 min,  0 users,  load average: 0.26, 0.22, 0.14
	Linux multinode-857482 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [57cf38fb5a61ccd685210f49586b1838e03e6a8f24e5dd7f90b212b82b98e2f7] <==
	I0425 19:29:50.852118       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:30:00.856520       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:30:00.856601       1 main.go:227] handling current node
	I0425 19:30:00.856694       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:30:00.856738       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:30:10.868344       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:30:10.868418       1 main.go:227] handling current node
	I0425 19:30:10.868533       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:30:10.868543       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:30:20.874010       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:30:20.874243       1 main.go:227] handling current node
	I0425 19:30:20.874277       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:30:20.874296       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:30:30.879419       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:30:30.879475       1 main.go:227] handling current node
	I0425 19:30:30.879491       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:30:30.879498       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:30:40.890980       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:30:40.891028       1 main.go:227] handling current node
	I0425 19:30:40.891040       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:30:40.891045       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:30:50.899185       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:30:50.899237       1 main.go:227] handling current node
	I0425 19:30:50.899247       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:30:50.899255       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e5e85ab7416e7b948664bf9fddfceab8fbd26029acd5c6c9f594094342858653] <==
	I0425 19:24:36.121503       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:24:46.135486       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:24:46.135531       1 main.go:227] handling current node
	I0425 19:24:46.135542       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:24:46.135548       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:24:46.135708       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:24:46.135715       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:24:56.140873       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:24:56.140934       1 main.go:227] handling current node
	I0425 19:24:56.140944       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:24:56.140950       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:24:56.141052       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:24:56.141087       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:25:06.153906       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:25:06.154009       1 main.go:227] handling current node
	I0425 19:25:06.154033       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:25:06.154052       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:25:06.154162       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:25:06.154181       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	I0425 19:25:16.160448       1 main.go:223] Handling node with IPs: map[192.168.39.194:{}]
	I0425 19:25:16.160553       1 main.go:227] handling current node
	I0425 19:25:16.160587       1 main.go:223] Handling node with IPs: map[192.168.39.172:{}]
	I0425 19:25:16.160612       1 main.go:250] Node multinode-857482-m02 has CIDR [10.244.1.0/24] 
	I0425 19:25:16.160969       1 main.go:223] Handling node with IPs: map[192.168.39.135:{}]
	I0425 19:25:16.161058       1 main.go:250] Node multinode-857482-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [374c5041b0427bbb507f1314dad9cd968ef8e87791cf735a8eaeacc1ad462c95] <==
	I0425 19:20:39.473601       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0425 19:20:39.516908       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0425 19:20:39.667342       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0425 19:20:39.675103       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.194]
	I0425 19:20:39.676102       1 controller.go:615] quota admission added evaluator for: endpoints
	I0425 19:20:39.683808       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0425 19:20:39.877012       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0425 19:20:40.501454       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0425 19:20:40.534106       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0425 19:20:40.556902       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0425 19:20:53.914157       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0425 19:20:53.963255       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0425 19:21:47.639266       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38376: use of closed network connection
	E0425 19:21:47.865469       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38404: use of closed network connection
	E0425 19:21:48.060936       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38422: use of closed network connection
	E0425 19:21:48.238495       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38434: use of closed network connection
	E0425 19:21:48.424092       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38448: use of closed network connection
	E0425 19:21:48.717517       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38476: use of closed network connection
	E0425 19:21:48.901238       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38488: use of closed network connection
	E0425 19:21:49.090264       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38514: use of closed network connection
	E0425 19:21:49.274224       1 conn.go:339] Error on socket receive: read tcp 192.168.39.194:8443->192.168.39.1:38534: use of closed network connection
	I0425 19:25:19.349794       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0425 19:25:19.374509       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0425 19:25:19.374587       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0425 19:25:19.381906       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [b4fc7eb8c6052e494d92460d7cc331abd9063f24fc7baece9b50aa5349942750] <==
	I0425 19:27:07.779189       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0425 19:27:07.779337       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0425 19:27:07.779398       1 shared_informer.go:320] Caches are synced for configmaps
	I0425 19:27:07.779470       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0425 19:27:07.779611       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0425 19:27:07.791996       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0425 19:27:07.793601       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0425 19:27:07.798326       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0425 19:27:07.798369       1 policy_source.go:224] refreshing policies
	I0425 19:27:07.798435       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0425 19:27:07.803368       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0425 19:27:07.808986       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0425 19:27:07.810065       1 aggregator.go:165] initial CRD sync complete...
	I0425 19:27:07.810114       1 autoregister_controller.go:141] Starting autoregister controller
	I0425 19:27:07.810122       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0425 19:27:07.810127       1 cache.go:39] Caches are synced for autoregister controller
	E0425 19:27:07.828902       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0425 19:27:08.689934       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0425 19:27:10.357159       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0425 19:27:10.514033       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0425 19:27:10.525216       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0425 19:27:10.599596       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0425 19:27:10.606335       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0425 19:27:20.538443       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0425 19:27:20.675368       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [843f769af6424666b87a62432a2cb68f18802cdddb39a7c6c61c6ed684d06b0e] <==
	I0425 19:21:30.781282       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-857482-m02\" does not exist"
	I0425 19:21:30.807797       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-857482-m02" podCIDRs=["10.244.1.0/24"]
	I0425 19:21:33.126105       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-857482-m02"
	I0425 19:21:40.670468       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:21:43.066368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.633057ms"
	I0425 19:21:43.107314       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.878822ms"
	I0425 19:21:43.107750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="257.083µs"
	I0425 19:21:43.108469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.778µs"
	I0425 19:21:46.606806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.78182ms"
	I0425 19:21:46.606927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.918µs"
	I0425 19:21:46.848153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.967758ms"
	I0425 19:21:46.849545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="216.224µs"
	I0425 19:22:19.217149       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:22:19.217405       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-857482-m03\" does not exist"
	I0425 19:22:19.241720       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-857482-m03" podCIDRs=["10.244.2.0/24"]
	I0425 19:22:23.146959       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-857482-m03"
	I0425 19:22:29.510793       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:23:01.928482       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:23:03.139409       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-857482-m03\" does not exist"
	I0425 19:23:03.141972       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:23:03.157988       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-857482-m03" podCIDRs=["10.244.3.0/24"]
	I0425 19:23:12.606492       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:23:58.217169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:23:58.264701       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.798909ms"
	I0425 19:23:58.264952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="113.063µs"
	
	
	==> kube-controller-manager [dd00ac37f2b0620a23ff4777f7b8c8f3bd22582d2b0680cc199db51c8d2a2ebe] <==
	I0425 19:27:45.064375       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-857482-m02" podCIDRs=["10.244.1.0/24"]
	I0425 19:27:46.957167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.396µs"
	I0425 19:27:46.988130       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.192µs"
	I0425 19:27:47.003159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.654µs"
	I0425 19:27:47.014015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.566µs"
	I0425 19:27:47.020067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.523µs"
	I0425 19:27:47.022869       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.879µs"
	I0425 19:27:51.627574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.481µs"
	I0425 19:27:54.704324       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:27:54.723098       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.84µs"
	I0425 19:27:54.737897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.585µs"
	I0425 19:27:58.326003       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.221672ms"
	I0425 19:27:58.326090       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.407µs"
	I0425 19:28:14.467559       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:28:15.568283       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:28:15.568420       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-857482-m03\" does not exist"
	I0425 19:28:15.590013       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-857482-m03" podCIDRs=["10.244.2.0/24"]
	I0425 19:28:25.178693       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:28:31.023512       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-857482-m02"
	I0425 19:29:10.694214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.723114ms"
	I0425 19:29:10.697591       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="225.08µs"
	I0425 19:29:20.469992       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-z7chs"
	I0425 19:29:20.495208       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-z7chs"
	I0425 19:29:20.495254       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-w9c48"
	I0425 19:29:20.515871       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-w9c48"
	
	
	==> kube-proxy [7e0f0bec0bfe8c519abee73f140e20eaa46e228490f690075a8a9ea1d0832b2e] <==
	I0425 19:27:09.654861       1 server_linux.go:69] "Using iptables proxy"
	I0425 19:27:09.669714       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.194"]
	I0425 19:27:09.901216       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:27:09.901289       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:27:09.901309       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:27:09.930816       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:27:09.931036       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:27:09.931088       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:27:09.938177       1 config.go:192] "Starting service config controller"
	I0425 19:27:09.938214       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:27:09.938239       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:27:09.938243       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:27:09.938711       1 config.go:319] "Starting node config controller"
	I0425 19:27:09.938721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:27:10.039289       1 shared_informer.go:320] Caches are synced for node config
	I0425 19:27:10.039892       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:27:10.041735       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [ef8755e344e044342af2911efd69ca957af86350628bb2e18c2bd746cedfaa53] <==
	I0425 19:20:54.929221       1 server_linux.go:69] "Using iptables proxy"
	I0425 19:20:54.940773       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.194"]
	I0425 19:20:55.057935       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:20:55.058006       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:20:55.058025       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:20:55.066168       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:20:55.066428       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:20:55.066440       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:20:55.086570       1 config.go:192] "Starting service config controller"
	I0425 19:20:55.087090       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:20:55.087539       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:20:55.087573       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:20:55.099466       1 config.go:319] "Starting node config controller"
	I0425 19:20:55.099505       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:20:55.188718       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:20:55.188789       1 shared_informer.go:320] Caches are synced for service config
	I0425 19:20:55.199553       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8a74a0d18b4fffdefa91ae8f49e5820ac816df60ff4fa7be1478ec7b6c96adac] <==
	I0425 19:27:05.886610       1 serving.go:380] Generated self-signed cert in-memory
	W0425 19:27:07.725222       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0425 19:27:07.725343       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 19:27:07.725379       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0425 19:27:07.725461       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0425 19:27:07.814515       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0425 19:27:07.814571       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:27:07.819158       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0425 19:27:07.819390       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0425 19:27:07.819401       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0425 19:27:07.819414       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0425 19:27:07.920366       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a2e02984ebc2fa9d61323fe33421d67e0a537fd450913c0ca6ea42f702296ccd] <==
	E0425 19:20:38.707064       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 19:20:38.756311       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 19:20:38.756368       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 19:20:38.797533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:20:38.797589       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0425 19:20:39.004314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0425 19:20:39.004375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0425 19:20:39.052087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 19:20:39.052883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0425 19:20:39.099845       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 19:20:39.099899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0425 19:20:39.125196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 19:20:39.125257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 19:20:39.183358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 19:20:39.183418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 19:20:39.221806       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 19:20:39.221943       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 19:20:39.233258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 19:20:39.233286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0425 19:20:39.239718       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 19:20:39.240834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 19:20:39.255453       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0425 19:20:39.255797       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0425 19:20:41.070246       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0425 19:25:19.351292       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.707876    3061 topology_manager.go:215] "Topology Admit Handler" podUID="6e2f5902-0e4a-4260-90e2-0b5b2fa73ae2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jpgn9"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.707926    3061 topology_manager.go:215] "Topology Admit Handler" podUID="3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a" podNamespace="kube-system" podName="storage-provisioner"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.707981    3061 topology_manager.go:215] "Topology Admit Handler" podUID="bfab8c51-36de-44d5-859a-efe4f72047e7" podNamespace="default" podName="busybox-fc5497c4f-5nvcd"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.727885    3061 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.813379    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88201317-c03e-4b73-9d30-4abec6ea5602-lib-modules\") pod \"kube-proxy-r749w\" (UID: \"88201317-c03e-4b73-9d30-4abec6ea5602\") " pod="kube-system/kube-proxy-r749w"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.813606    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88201317-c03e-4b73-9d30-4abec6ea5602-xtables-lock\") pod \"kube-proxy-r749w\" (UID: \"88201317-c03e-4b73-9d30-4abec6ea5602\") " pod="kube-system/kube-proxy-r749w"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.814523    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6dda0d17-6ae1-40ac-9ed3-a272478b00e9-cni-cfg\") pod \"kindnet-cslck\" (UID: \"6dda0d17-6ae1-40ac-9ed3-a272478b00e9\") " pod="kube-system/kindnet-cslck"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.814753    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dda0d17-6ae1-40ac-9ed3-a272478b00e9-xtables-lock\") pod \"kindnet-cslck\" (UID: \"6dda0d17-6ae1-40ac-9ed3-a272478b00e9\") " pod="kube-system/kindnet-cslck"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.814884    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a-tmp\") pod \"storage-provisioner\" (UID: \"3a9ec463-4b04-4a7e-8f7a-b8bf11cee10a\") " pod="kube-system/storage-provisioner"
	Apr 25 19:27:08 multinode-857482 kubelet[3061]: I0425 19:27:08.815791    3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dda0d17-6ae1-40ac-9ed3-a272478b00e9-lib-modules\") pod \"kindnet-cslck\" (UID: \"6dda0d17-6ae1-40ac-9ed3-a272478b00e9\") " pod="kube-system/kindnet-cslck"
	Apr 25 19:28:03 multinode-857482 kubelet[3061]: E0425 19:28:03.804538    3061 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:28:03 multinode-857482 kubelet[3061]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:28:03 multinode-857482 kubelet[3061]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:28:03 multinode-857482 kubelet[3061]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:28:03 multinode-857482 kubelet[3061]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 19:29:03 multinode-857482 kubelet[3061]: E0425 19:29:03.807864    3061 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:29:03 multinode-857482 kubelet[3061]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:29:03 multinode-857482 kubelet[3061]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:29:03 multinode-857482 kubelet[3061]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:29:03 multinode-857482 kubelet[3061]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 19:30:03 multinode-857482 kubelet[3061]: E0425 19:30:03.804321    3061 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 19:30:03 multinode-857482 kubelet[3061]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 19:30:03 multinode-857482 kubelet[3061]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 19:30:03 multinode-857482 kubelet[3061]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 19:30:03 multinode-857482 kubelet[3061]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:30:51.707610   45035 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18757-6355/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-857482 -n multinode-857482
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-857482 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.52s)

                                                
                                    
TestPreload (351.83s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-286616 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0425 19:35:28.489956   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 19:35:45.438350   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-286616 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m28.988358368s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-286616 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-286616 image pull gcr.io/k8s-minikube/busybox: (2.942285532s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-286616
E0425 19:38:36.328697   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-286616: exit status 82 (2m0.481483796s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-286616"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-286616 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-04-25 19:40:07.545802654 +0000 UTC m=+4133.954737203
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-286616 -n test-preload-286616
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-286616 -n test-preload-286616: exit status 3 (18.516794511s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:40:26.058534   48185 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host
	E0425 19:40:26.058556   48185 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.126:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-286616" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-286616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-286616
--- FAIL: TestPreload (351.83s)

                                                
                                    
x
+
TestKubernetesUpgrade (377.64s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-215221 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0425 19:45:45.439337   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-215221 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m37.678901188s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-215221] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18757
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-215221" primary control-plane node in "kubernetes-upgrade-215221" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 19:45:44.149674   54245 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:45:44.149777   54245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:45:44.149789   54245 out.go:304] Setting ErrFile to fd 2...
	I0425 19:45:44.149795   54245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:45:44.150000   54245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:45:44.150601   54245 out.go:298] Setting JSON to false
	I0425 19:45:44.151483   54245 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5290,"bootTime":1714069054,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:45:44.151545   54245 start.go:139] virtualization: kvm guest
	I0425 19:45:44.153930   54245 out.go:177] * [kubernetes-upgrade-215221] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:45:44.155842   54245 notify.go:220] Checking for updates...
	I0425 19:45:44.155852   54245 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:45:44.157188   54245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:45:44.158661   54245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:45:44.159964   54245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:45:44.161268   54245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:45:44.162695   54245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:45:44.164555   54245 config.go:182] Loaded profile config "NoKubernetes-335371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0425 19:45:44.164640   54245 config.go:182] Loaded profile config "cert-expiration-571974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:45:44.164751   54245 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:45:44.201358   54245 out.go:177] * Using the kvm2 driver based on user configuration
	I0425 19:45:44.202809   54245 start.go:297] selected driver: kvm2
	I0425 19:45:44.202820   54245 start.go:901] validating driver "kvm2" against <nil>
	I0425 19:45:44.202838   54245 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:45:44.203475   54245 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:45:44.203558   54245 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:45:44.221144   54245 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:45:44.221228   54245 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 19:45:44.221450   54245 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0425 19:45:44.221507   54245 cni.go:84] Creating CNI manager for ""
	I0425 19:45:44.221521   54245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:45:44.221528   54245 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 19:45:44.221601   54245 start.go:340] cluster config:
	{Name:kubernetes-upgrade-215221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-215221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:45:44.221692   54245 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:45:44.224591   54245 out.go:177] * Starting "kubernetes-upgrade-215221" primary control-plane node in "kubernetes-upgrade-215221" cluster
	I0425 19:45:44.226049   54245 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 19:45:44.226093   54245 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:45:44.226106   54245 cache.go:56] Caching tarball of preloaded images
	I0425 19:45:44.226196   54245 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:45:44.226230   54245 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0425 19:45:44.226332   54245 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/config.json ...
	I0425 19:45:44.226358   54245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/config.json: {Name:mkab5a57fd7ec9a694d18364dab02dd413025794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:44.226518   54245 start.go:360] acquireMachinesLock for kubernetes-upgrade-215221: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:45:46.779266   54245 start.go:364] duration metric: took 2.552704927s to acquireMachinesLock for "kubernetes-upgrade-215221"
	I0425 19:45:46.779348   54245 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-215221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-215221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 19:45:46.779451   54245 start.go:125] createHost starting for "" (driver="kvm2")
	I0425 19:45:46.781739   54245 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0425 19:45:46.781921   54245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:45:46.781971   54245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:45:46.797648   54245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41321
	I0425 19:45:46.797980   54245 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:45:46.798490   54245 main.go:141] libmachine: Using API Version  1
	I0425 19:45:46.798510   54245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:45:46.798875   54245 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:45:46.799044   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetMachineName
	I0425 19:45:46.799168   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .DriverName
	I0425 19:45:46.799344   54245 start.go:159] libmachine.API.Create for "kubernetes-upgrade-215221" (driver="kvm2")
	I0425 19:45:46.799373   54245 client.go:168] LocalClient.Create starting
	I0425 19:45:46.799403   54245 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 19:45:46.799809   54245 main.go:141] libmachine: Decoding PEM data...
	I0425 19:45:46.799868   54245 main.go:141] libmachine: Parsing certificate...
	I0425 19:45:46.799943   54245 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 19:45:46.799973   54245 main.go:141] libmachine: Decoding PEM data...
	I0425 19:45:46.799986   54245 main.go:141] libmachine: Parsing certificate...
	I0425 19:45:46.800021   54245 main.go:141] libmachine: Running pre-create checks...
	I0425 19:45:46.800031   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .PreCreateCheck
	I0425 19:45:46.801613   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetConfigRaw
	I0425 19:45:46.802131   54245 main.go:141] libmachine: Creating machine...
	I0425 19:45:46.802153   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .Create
	I0425 19:45:46.802311   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Creating KVM machine...
	I0425 19:45:46.803401   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found existing default KVM network
	I0425 19:45:46.804529   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:46.804361   54295 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fe:45:97} reservation:<nil>}
	I0425 19:45:46.806492   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:46.806389   54295 network.go:209] skipping subnet 192.168.50.0/24 that is reserved: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0425 19:45:46.807438   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:46.807369   54295 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015f00}
	I0425 19:45:46.807476   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | created network xml: 
	I0425 19:45:46.807496   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | <network>
	I0425 19:45:46.807508   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG |   <name>mk-kubernetes-upgrade-215221</name>
	I0425 19:45:46.807531   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG |   <dns enable='no'/>
	I0425 19:45:46.807545   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG |   
	I0425 19:45:46.807560   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0425 19:45:46.807568   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG |     <dhcp>
	I0425 19:45:46.807578   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0425 19:45:46.807586   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG |     </dhcp>
	I0425 19:45:46.807593   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG |   </ip>
	I0425 19:45:46.807601   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG |   
	I0425 19:45:46.807607   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | </network>
	I0425 19:45:46.807615   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | 
	I0425 19:45:46.813124   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | trying to create private KVM network mk-kubernetes-upgrade-215221 192.168.61.0/24...
	I0425 19:45:46.888080   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | private KVM network mk-kubernetes-upgrade-215221 192.168.61.0/24 created
	I0425 19:45:46.888315   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221 ...
	I0425 19:45:46.888340   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 19:45:46.888351   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:46.888281   54295 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:45:46.888501   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 19:45:47.118613   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:47.118505   54295 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221/id_rsa...
	I0425 19:45:47.384564   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:47.384436   54295 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221/kubernetes-upgrade-215221.rawdisk...
	I0425 19:45:47.384601   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Writing magic tar header
	I0425 19:45:47.384616   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Writing SSH key tar header
	I0425 19:45:47.384650   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:47.384559   54295 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221 ...
	I0425 19:45:47.384700   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221 (perms=drwx------)
	I0425 19:45:47.384733   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 19:45:47.384748   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221
	I0425 19:45:47.384773   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 19:45:47.384789   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:45:47.384801   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 19:45:47.384812   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 19:45:47.384828   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 19:45:47.384837   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 19:45:47.384845   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Checking permissions on dir: /home/jenkins
	I0425 19:45:47.384852   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Checking permissions on dir: /home
	I0425 19:45:47.384859   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Skipping /home - not owner
	I0425 19:45:47.384869   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 19:45:47.384875   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 19:45:47.384891   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Creating domain...
	I0425 19:45:47.385865   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) define libvirt domain using xml: 
	I0425 19:45:47.385888   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) <domain type='kvm'>
	I0425 19:45:47.385900   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   <name>kubernetes-upgrade-215221</name>
	I0425 19:45:47.385909   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   <memory unit='MiB'>2200</memory>
	I0425 19:45:47.385918   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   <vcpu>2</vcpu>
	I0425 19:45:47.385925   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   <features>
	I0425 19:45:47.385933   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <acpi/>
	I0425 19:45:47.385942   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <apic/>
	I0425 19:45:47.385951   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <pae/>
	I0425 19:45:47.385958   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     
	I0425 19:45:47.385967   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   </features>
	I0425 19:45:47.385975   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   <cpu mode='host-passthrough'>
	I0425 19:45:47.385983   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   
	I0425 19:45:47.385995   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   </cpu>
	I0425 19:45:47.386017   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   <os>
	I0425 19:45:47.386032   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <type>hvm</type>
	I0425 19:45:47.386043   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <boot dev='cdrom'/>
	I0425 19:45:47.386048   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <boot dev='hd'/>
	I0425 19:45:47.386054   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <bootmenu enable='no'/>
	I0425 19:45:47.386058   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   </os>
	I0425 19:45:47.386064   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   <devices>
	I0425 19:45:47.386069   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <disk type='file' device='cdrom'>
	I0425 19:45:47.386078   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221/boot2docker.iso'/>
	I0425 19:45:47.386088   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <target dev='hdc' bus='scsi'/>
	I0425 19:45:47.386093   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <readonly/>
	I0425 19:45:47.386098   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     </disk>
	I0425 19:45:47.386104   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <disk type='file' device='disk'>
	I0425 19:45:47.386110   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 19:45:47.386119   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221/kubernetes-upgrade-215221.rawdisk'/>
	I0425 19:45:47.386123   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <target dev='hda' bus='virtio'/>
	I0425 19:45:47.386128   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     </disk>
	I0425 19:45:47.386133   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <interface type='network'>
	I0425 19:45:47.386139   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <source network='mk-kubernetes-upgrade-215221'/>
	I0425 19:45:47.386143   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <model type='virtio'/>
	I0425 19:45:47.386148   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     </interface>
	I0425 19:45:47.386153   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <interface type='network'>
	I0425 19:45:47.386159   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <source network='default'/>
	I0425 19:45:47.386166   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <model type='virtio'/>
	I0425 19:45:47.386172   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     </interface>
	I0425 19:45:47.386176   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <serial type='pty'>
	I0425 19:45:47.386181   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <target port='0'/>
	I0425 19:45:47.386185   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     </serial>
	I0425 19:45:47.386190   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <console type='pty'>
	I0425 19:45:47.386195   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <target type='serial' port='0'/>
	I0425 19:45:47.386200   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     </console>
	I0425 19:45:47.386221   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     <rng model='virtio'>
	I0425 19:45:47.386237   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)       <backend model='random'>/dev/random</backend>
	I0425 19:45:47.386243   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     </rng>
	I0425 19:45:47.386251   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     
	I0425 19:45:47.386259   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)     
	I0425 19:45:47.386266   54245 main.go:141] libmachine: (kubernetes-upgrade-215221)   </devices>
	I0425 19:45:47.386271   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) </domain>
	I0425 19:45:47.386278   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) 
	I0425 19:45:47.390659   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:9b:b0:87 in network default
	I0425 19:45:47.391239   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Ensuring networks are active...
	I0425 19:45:47.391265   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:47.391970   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Ensuring network default is active
	I0425 19:45:47.392227   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Ensuring network mk-kubernetes-upgrade-215221 is active
	I0425 19:45:47.392857   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Getting domain xml...
	I0425 19:45:47.393588   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Creating domain...
	I0425 19:45:48.671320   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Waiting to get IP...
	I0425 19:45:48.672089   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:48.672525   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:45:48.672553   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:48.672489   54295 retry.go:31] will retry after 221.933134ms: waiting for machine to come up
	I0425 19:45:48.896063   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:48.896548   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:45:48.896569   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:48.896500   54295 retry.go:31] will retry after 386.833691ms: waiting for machine to come up
	I0425 19:45:49.284845   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:49.285190   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:45:49.285217   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:49.285153   54295 retry.go:31] will retry after 412.334573ms: waiting for machine to come up
	I0425 19:45:49.698648   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:49.699167   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:45:49.699196   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:49.699134   54295 retry.go:31] will retry after 507.953703ms: waiting for machine to come up
	I0425 19:45:50.208803   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:50.209236   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:45:50.209262   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:50.209184   54295 retry.go:31] will retry after 623.742091ms: waiting for machine to come up
	I0425 19:45:50.835882   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:50.836407   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:45:50.836437   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:50.836334   54295 retry.go:31] will retry after 917.514745ms: waiting for machine to come up
	I0425 19:45:51.755505   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:51.755881   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:45:51.755910   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:51.755835   54295 retry.go:31] will retry after 801.022219ms: waiting for machine to come up
	I0425 19:45:52.558108   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:52.558547   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:45:52.558579   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:52.558486   54295 retry.go:31] will retry after 1.062025323s: waiting for machine to come up
	I0425 19:45:53.621791   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:53.622209   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:45:53.622238   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:53.622164   54295 retry.go:31] will retry after 1.767605129s: waiting for machine to come up
	I0425 19:45:55.392085   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:55.392552   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:45:55.392583   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:55.392502   54295 retry.go:31] will retry after 1.951013289s: waiting for machine to come up
	I0425 19:45:57.344938   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:45:57.345563   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:45:57.345592   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:45:57.345418   54295 retry.go:31] will retry after 2.754992143s: waiting for machine to come up
	I0425 19:46:00.103816   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:00.104339   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:46:00.104367   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:46:00.104301   54295 retry.go:31] will retry after 2.839073772s: waiting for machine to come up
	I0425 19:46:02.945106   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:02.945583   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:46:02.945604   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:46:02.945546   54295 retry.go:31] will retry after 3.245864417s: waiting for machine to come up
	I0425 19:46:06.192720   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:06.193163   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find current IP address of domain kubernetes-upgrade-215221 in network mk-kubernetes-upgrade-215221
	I0425 19:46:06.193191   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | I0425 19:46:06.193131   54295 retry.go:31] will retry after 4.944208386s: waiting for machine to come up
	I0425 19:46:11.141852   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.142473   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has current primary IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.142503   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Found IP for machine: 192.168.61.198
	I0425 19:46:11.142540   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Reserving static IP address...
	I0425 19:46:11.142927   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-215221", mac: "52:54:00:37:82:3d", ip: "192.168.61.198"} in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.278998   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Getting to WaitForSSH function...
	I0425 19:46:11.279025   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Reserved static IP address: 192.168.61.198
	I0425 19:46:11.279040   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Waiting for SSH to be available...
	I0425 19:46:11.281910   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.282427   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:minikube Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:11.282458   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.282664   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Using SSH client type: external
	I0425 19:46:11.282693   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221/id_rsa (-rw-------)
	I0425 19:46:11.282734   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 19:46:11.282757   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | About to run SSH command:
	I0425 19:46:11.282768   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | exit 0
	I0425 19:46:11.407157   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | SSH cmd err, output: <nil>: 
	I0425 19:46:11.407571   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) KVM machine creation complete!
	I0425 19:46:11.408012   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetConfigRaw
	I0425 19:46:11.419906   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .DriverName
	I0425 19:46:11.420168   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .DriverName
	I0425 19:46:11.420380   54245 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 19:46:11.420398   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetState
	I0425 19:46:11.421966   54245 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 19:46:11.421988   54245 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 19:46:11.421996   54245 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 19:46:11.422005   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHHostname
	I0425 19:46:11.424504   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.424888   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:minikube Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:11.424913   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.425044   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHPort
	I0425 19:46:11.425232   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:11.425406   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:11.425559   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHUsername
	I0425 19:46:11.425732   54245 main.go:141] libmachine: Using SSH client type: native
	I0425 19:46:11.425945   54245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.198 22 <nil> <nil>}
	I0425 19:46:11.425957   54245 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 19:46:11.530304   54245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 19:46:11.530332   54245 main.go:141] libmachine: Detecting the provisioner...
	I0425 19:46:11.530342   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHHostname
	I0425 19:46:11.533431   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.533795   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:11.533834   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.534007   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHPort
	I0425 19:46:11.534281   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:11.534456   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:11.534613   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHUsername
	I0425 19:46:11.534771   54245 main.go:141] libmachine: Using SSH client type: native
	I0425 19:46:11.534999   54245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.198 22 <nil> <nil>}
	I0425 19:46:11.535015   54245 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 19:46:11.639568   54245 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 19:46:11.639658   54245 main.go:141] libmachine: found compatible host: buildroot
	I0425 19:46:11.639670   54245 main.go:141] libmachine: Provisioning with buildroot...
	I0425 19:46:11.639682   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetMachineName
	I0425 19:46:11.639941   54245 buildroot.go:166] provisioning hostname "kubernetes-upgrade-215221"
	I0425 19:46:11.639966   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetMachineName
	I0425 19:46:11.640216   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHHostname
	I0425 19:46:11.642784   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.643079   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:11.643113   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.643306   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHPort
	I0425 19:46:11.643586   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:11.643766   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:11.643935   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHUsername
	I0425 19:46:11.644105   54245 main.go:141] libmachine: Using SSH client type: native
	I0425 19:46:11.644293   54245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.198 22 <nil> <nil>}
	I0425 19:46:11.644311   54245 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-215221 && echo "kubernetes-upgrade-215221" | sudo tee /etc/hostname
	I0425 19:46:11.763619   54245 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-215221
	
	I0425 19:46:11.763681   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHHostname
	I0425 19:46:11.766494   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.766910   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:11.766948   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.767109   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHPort
	I0425 19:46:11.767328   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:11.767527   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:11.767702   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHUsername
	I0425 19:46:11.767902   54245 main.go:141] libmachine: Using SSH client type: native
	I0425 19:46:11.768105   54245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.198 22 <nil> <nil>}
	I0425 19:46:11.768125   54245 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-215221' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-215221/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-215221' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 19:46:11.881487   54245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 19:46:11.881515   54245 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 19:46:11.881549   54245 buildroot.go:174] setting up certificates
	I0425 19:46:11.881558   54245 provision.go:84] configureAuth start
	I0425 19:46:11.881568   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetMachineName
	I0425 19:46:11.881889   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetIP
	I0425 19:46:11.884423   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.884780   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:11.884812   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.884999   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHHostname
	I0425 19:46:11.886851   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.887193   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:11.887227   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:11.887301   54245 provision.go:143] copyHostCerts
	I0425 19:46:11.887359   54245 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 19:46:11.887370   54245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:46:11.887423   54245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 19:46:11.887553   54245 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 19:46:11.887563   54245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:46:11.887594   54245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 19:46:11.887646   54245 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 19:46:11.887653   54245 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:46:11.887672   54245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 19:46:11.887719   54245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-215221 san=[127.0.0.1 192.168.61.198 kubernetes-upgrade-215221 localhost minikube]
	I0425 19:46:12.078637   54245 provision.go:177] copyRemoteCerts
	I0425 19:46:12.078690   54245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 19:46:12.078718   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHHostname
	I0425 19:46:12.081145   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.081480   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:12.081507   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.081697   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHPort
	I0425 19:46:12.081887   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:12.082044   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHUsername
	I0425 19:46:12.082220   54245 sshutil.go:53] new ssh client: &{IP:192.168.61.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221/id_rsa Username:docker}
	I0425 19:46:12.170827   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 19:46:12.198338   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 19:46:12.225953   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0425 19:46:12.253084   54245 provision.go:87] duration metric: took 371.511501ms to configureAuth
	I0425 19:46:12.253119   54245 buildroot.go:189] setting minikube options for container-runtime
	I0425 19:46:12.253271   54245 config.go:182] Loaded profile config "kubernetes-upgrade-215221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 19:46:12.253340   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHHostname
	I0425 19:46:12.255848   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.256146   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:12.256178   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.256345   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHPort
	I0425 19:46:12.256559   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:12.256726   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:12.256848   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHUsername
	I0425 19:46:12.256987   54245 main.go:141] libmachine: Using SSH client type: native
	I0425 19:46:12.257180   54245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.198 22 <nil> <nil>}
	I0425 19:46:12.257198   54245 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 19:46:12.548715   54245 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
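The step above drops a sysconfig file read by CRI-O's minikube unit, marking the in-cluster service CIDR (10.96.0.0/12) as an insecure registry, and then restarts the runtime. A minimal stand-alone sketch of the same change, with paths and contents taken from the logged command:

    # Sketch: write the CRI-O sysconfig drop-in and restart the runtime,
    # mirroring the SSH command logged above.
    sudo mkdir -p /etc/sysconfig
    printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio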
	I0425 19:46:12.548751   54245 main.go:141] libmachine: Checking connection to Docker...
	I0425 19:46:12.548763   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetURL
	I0425 19:46:12.550060   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | Using libvirt version 6000000
	I0425 19:46:12.552550   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.552908   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:12.552946   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.553134   54245 main.go:141] libmachine: Docker is up and running!
	I0425 19:46:12.553149   54245 main.go:141] libmachine: Reticulating splines...
	I0425 19:46:12.553160   54245 client.go:171] duration metric: took 25.753778002s to LocalClient.Create
	I0425 19:46:12.553181   54245 start.go:167] duration metric: took 25.753838407s to libmachine.API.Create "kubernetes-upgrade-215221"
	I0425 19:46:12.553190   54245 start.go:293] postStartSetup for "kubernetes-upgrade-215221" (driver="kvm2")
	I0425 19:46:12.553200   54245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 19:46:12.553232   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .DriverName
	I0425 19:46:12.553487   54245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 19:46:12.553517   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHHostname
	I0425 19:46:12.555750   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.556149   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:12.556192   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.556372   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHPort
	I0425 19:46:12.556550   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:12.556724   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHUsername
	I0425 19:46:12.556882   54245 sshutil.go:53] new ssh client: &{IP:192.168.61.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221/id_rsa Username:docker}
	I0425 19:46:12.638879   54245 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 19:46:12.643664   54245 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 19:46:12.643685   54245 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 19:46:12.643743   54245 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 19:46:12.643840   54245 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 19:46:12.643930   54245 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 19:46:12.655227   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:46:12.682910   54245 start.go:296] duration metric: took 129.707909ms for postStartSetup
	I0425 19:46:12.682966   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetConfigRaw
	I0425 19:46:12.683563   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetIP
	I0425 19:46:12.686283   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.686606   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:12.686641   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.686841   54245 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/config.json ...
	I0425 19:46:12.687087   54245 start.go:128] duration metric: took 25.907621643s to createHost
	I0425 19:46:12.687122   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHHostname
	I0425 19:46:12.689295   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.689635   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:12.689664   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.689745   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHPort
	I0425 19:46:12.689943   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:12.690100   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:12.690284   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHUsername
	I0425 19:46:12.690457   54245 main.go:141] libmachine: Using SSH client type: native
	I0425 19:46:12.690630   54245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.198 22 <nil> <nil>}
	I0425 19:46:12.690643   54245 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 19:46:12.795568   54245 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714074372.779585880
	
	I0425 19:46:12.795595   54245 fix.go:216] guest clock: 1714074372.779585880
	I0425 19:46:12.795606   54245 fix.go:229] Guest: 2024-04-25 19:46:12.77958588 +0000 UTC Remote: 2024-04-25 19:46:12.687101711 +0000 UTC m=+28.586995886 (delta=92.484169ms)
	I0425 19:46:12.795632   54245 fix.go:200] guest clock delta is within tolerance: 92.484169ms
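The delta reported above is simply the guest clock minus the host-side reference: 19:46:12.779585880 - 19:46:12.687101711 = 0.092484169 s, i.e. the 92.484169ms printed above, which is well within tolerance, so no clock adjustment is needed.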
	I0425 19:46:12.795638   54245 start.go:83] releasing machines lock for "kubernetes-upgrade-215221", held for 26.016341192s
	I0425 19:46:12.795670   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .DriverName
	I0425 19:46:12.795931   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetIP
	I0425 19:46:12.798632   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.798941   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:12.798970   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.799118   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .DriverName
	I0425 19:46:12.799657   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .DriverName
	I0425 19:46:12.799844   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .DriverName
	I0425 19:46:12.799934   54245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 19:46:12.799978   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHHostname
	I0425 19:46:12.800100   54245 ssh_runner.go:195] Run: cat /version.json
	I0425 19:46:12.800127   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHHostname
	I0425 19:46:12.803072   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.803119   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.803412   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:12.803440   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:12.803477   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.803512   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:12.803580   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHPort
	I0425 19:46:12.803735   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHPort
	I0425 19:46:12.803829   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:12.803924   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHKeyPath
	I0425 19:46:12.804002   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHUsername
	I0425 19:46:12.804076   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetSSHUsername
	I0425 19:46:12.804118   54245 sshutil.go:53] new ssh client: &{IP:192.168.61.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221/id_rsa Username:docker}
	I0425 19:46:12.804187   54245 sshutil.go:53] new ssh client: &{IP:192.168.61.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/kubernetes-upgrade-215221/id_rsa Username:docker}
	I0425 19:46:12.884581   54245 ssh_runner.go:195] Run: systemctl --version
	I0425 19:46:12.914241   54245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 19:46:13.090052   54245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 19:46:13.097627   54245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 19:46:13.097708   54245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 19:46:13.117270   54245 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 19:46:13.117294   54245 start.go:494] detecting cgroup driver to use...
	I0425 19:46:13.117367   54245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 19:46:13.143816   54245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 19:46:13.160957   54245 docker.go:217] disabling cri-docker service (if available) ...
	I0425 19:46:13.161018   54245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 19:46:13.176653   54245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 19:46:13.196263   54245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 19:46:13.340295   54245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 19:46:13.509147   54245 docker.go:233] disabling docker service ...
	I0425 19:46:13.509215   54245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 19:46:13.529766   54245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 19:46:13.545878   54245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 19:46:13.708267   54245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 19:46:13.849225   54245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 19:46:13.867558   54245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 19:46:13.891100   54245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0425 19:46:13.891173   54245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:46:13.904231   54245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 19:46:13.904292   54245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:46:13.918262   54245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:46:13.931717   54245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
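Between them, the crictl.yaml write and the three sed edits above wire the node's CLI and runtime together: crictl is pointed at the CRI-O socket, the pause image is pinned to registry.k8s.io/pause:3.2 (what kubeadm expects for v1.20.0), and the cgroup manager is set to cgroupfs with conmon placed in the "pod" cgroup. A sketch for checking the results on the guest, with paths, keys, and values taken from the logged commands:

    # Sketch: inspect the files written/edited by the steps above.
    cat /etc/crictl.yaml    # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected after the sed edits:
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"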
	I0425 19:46:13.945277   54245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 19:46:13.958809   54245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 19:46:13.969723   54245 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 19:46:13.969783   54245 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 19:46:13.997735   54245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
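The failed sysctl above is expected on a fresh guest: the bridge sysctls only exist once the br_netfilter module is loaded, which the following modprobe takes care of before IP forwarding is switched on. The same checks, runnable by hand:

    # Sketch: reproduce the netfilter/forwarding setup logged above.
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables       # exists (and should be 1) once the module is loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward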
	I0425 19:46:14.009245   54245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:46:14.144292   54245 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 19:46:14.300625   54245 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 19:46:14.300704   54245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 19:46:14.306566   54245 start.go:562] Will wait 60s for crictl version
	I0425 19:46:14.306628   54245 ssh_runner.go:195] Run: which crictl
	I0425 19:46:14.311329   54245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 19:46:14.353589   54245 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 19:46:14.353680   54245 ssh_runner.go:195] Run: crio --version
	I0425 19:46:14.399407   54245 ssh_runner.go:195] Run: crio --version
	I0425 19:46:14.436614   54245 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0425 19:46:14.438407   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) Calling .GetIP
	I0425 19:46:14.441420   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:14.441760   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:82:3d", ip: ""} in network mk-kubernetes-upgrade-215221: {Iface:virbr1 ExpiryTime:2024-04-25 20:46:03 +0000 UTC Type:0 Mac:52:54:00:37:82:3d Iaid: IPaddr:192.168.61.198 Prefix:24 Hostname:kubernetes-upgrade-215221 Clientid:01:52:54:00:37:82:3d}
	I0425 19:46:14.441791   54245 main.go:141] libmachine: (kubernetes-upgrade-215221) DBG | domain kubernetes-upgrade-215221 has defined IP address 192.168.61.198 and MAC address 52:54:00:37:82:3d in network mk-kubernetes-upgrade-215221
	I0425 19:46:14.442057   54245 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 19:46:14.447374   54245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 19:46:14.463592   54245 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-215221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-215221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.198 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 19:46:14.463742   54245 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 19:46:14.463805   54245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:46:14.503246   54245 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 19:46:14.503343   54245 ssh_runner.go:195] Run: which lz4
	I0425 19:46:14.508343   54245 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 19:46:14.513240   54245 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 19:46:14.513287   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0425 19:46:16.677238   54245 crio.go:462] duration metric: took 2.168938419s to copy over tarball
	I0425 19:46:16.677314   54245 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 19:46:19.615604   54245 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.938240913s)
	I0425 19:46:19.615638   54245 crio.go:469] duration metric: took 2.938372008s to extract the tarball
	I0425 19:46:19.615650   54245 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 19:46:19.664279   54245 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:46:19.715216   54245 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 19:46:19.715244   54245 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 19:46:19.715322   54245 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 19:46:19.715347   54245 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 19:46:19.715370   54245 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0425 19:46:19.715385   54245 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 19:46:19.715420   54245 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0425 19:46:19.715556   54245 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 19:46:19.715597   54245 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0425 19:46:19.715555   54245 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 19:46:19.716928   54245 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 19:46:19.717616   54245 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0425 19:46:19.717665   54245 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 19:46:19.717624   54245 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0425 19:46:19.717747   54245 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 19:46:19.717760   54245 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 19:46:19.717627   54245 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0425 19:46:19.717658   54245 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 19:46:19.923681   54245 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0425 19:46:19.941613   54245 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0425 19:46:19.961432   54245 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 19:46:19.980286   54245 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0425 19:46:19.993274   54245 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0425 19:46:19.993336   54245 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0425 19:46:19.993391   54245 ssh_runner.go:195] Run: which crictl
	I0425 19:46:20.025954   54245 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0425 19:46:20.026003   54245 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 19:46:20.026052   54245 ssh_runner.go:195] Run: which crictl
	I0425 19:46:20.064093   54245 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0425 19:46:20.064150   54245 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 19:46:20.064198   54245 ssh_runner.go:195] Run: which crictl
	I0425 19:46:20.072208   54245 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0425 19:46:20.072249   54245 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0425 19:46:20.072277   54245 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0425 19:46:20.072328   54245 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 19:46:20.072363   54245 ssh_runner.go:195] Run: which crictl
	I0425 19:46:20.072284   54245 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 19:46:20.112518   54245 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0425 19:46:20.123570   54245 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0425 19:46:20.141264   54245 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0425 19:46:20.167091   54245 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0425 19:46:20.167247   54245 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0425 19:46:20.189910   54245 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0425 19:46:20.190045   54245 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0425 19:46:20.280289   54245 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0425 19:46:20.280412   54245 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 19:46:20.280431   54245 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0425 19:46:20.280449   54245 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0425 19:46:20.280472   54245 ssh_runner.go:195] Run: which crictl
	I0425 19:46:20.280474   54245 ssh_runner.go:195] Run: which crictl
	I0425 19:46:20.281902   54245 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0425 19:46:20.281938   54245 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0425 19:46:20.281979   54245 ssh_runner.go:195] Run: which crictl
	I0425 19:46:20.294750   54245 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0425 19:46:20.294811   54245 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0425 19:46:20.294929   54245 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0425 19:46:20.297698   54245 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0425 19:46:20.362166   54245 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0425 19:46:20.362386   54245 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0425 19:46:20.379226   54245 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0425 19:46:20.614474   54245 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 19:46:20.767854   54245 cache_images.go:92] duration metric: took 1.05257542s to LoadCachedImages
	W0425 19:46:20.767980   54245 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0425 19:46:20.767998   54245 kubeadm.go:928] updating node { 192.168.61.198 8443 v1.20.0 crio true true} ...
	I0425 19:46:20.768136   54245 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-215221 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-215221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
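The kubelet drop-in generated above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 433-byte scp); the empty ExecStart= line clears the stock unit's command before the minikube-specific one is set. One way to inspect the merged unit on the guest:

    # Sketch: show the kubelet unit together with the drop-in written below.
    systemctl cat kubelet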
	I0425 19:46:20.768210   54245 ssh_runner.go:195] Run: crio config
	I0425 19:46:20.820585   54245 cni.go:84] Creating CNI manager for ""
	I0425 19:46:20.820609   54245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:46:20.820621   54245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 19:46:20.820637   54245 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.198 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-215221 NodeName:kubernetes-upgrade-215221 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0425 19:46:20.820783   54245 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-215221"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
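The kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, using the v1beta2 API that matches v1.20.0) is written to /var/tmp/minikube/kubeadm.yaml.new below and later renamed to kubeadm.yaml. A hedged sketch of how such a config is typically consumed by kubeadm itself (standard kubeadm flags, not commands copied from this log):

    # Sketch: validate and apply a generated config like the one above.
    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all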
	I0425 19:46:20.820845   54245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0425 19:46:20.834173   54245 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 19:46:20.834263   54245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 19:46:20.845960   54245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0425 19:46:20.868143   54245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 19:46:20.889250   54245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0425 19:46:20.910891   54245 ssh_runner.go:195] Run: grep 192.168.61.198	control-plane.minikube.internal$ /etc/hosts
	I0425 19:46:20.915764   54245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 19:46:20.930790   54245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:46:21.065471   54245 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 19:46:21.085475   54245 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221 for IP: 192.168.61.198
	I0425 19:46:21.085517   54245 certs.go:194] generating shared ca certs ...
	I0425 19:46:21.085540   54245 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:46:21.085733   54245 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 19:46:21.085791   54245 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 19:46:21.085803   54245 certs.go:256] generating profile certs ...
	I0425 19:46:21.085888   54245 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/client.key
	I0425 19:46:21.085909   54245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/client.crt with IP's: []
	I0425 19:46:21.411922   54245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/client.crt ...
	I0425 19:46:21.411954   54245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/client.crt: {Name:mkb3e4aa87b069baa1d74473cf27971c5c15be3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:46:21.412150   54245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/client.key ...
	I0425 19:46:21.412167   54245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/client.key: {Name:mk3ca857e5bc3909b1e064e5b97b5ac4746f142f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:46:21.412269   54245 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.key.39c2a78a
	I0425 19:46:21.412293   54245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.crt.39c2a78a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.198]
	I0425 19:46:21.481416   54245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.crt.39c2a78a ...
	I0425 19:46:21.481448   54245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.crt.39c2a78a: {Name:mk80049edff070d5d57b532095fa5b337234793b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:46:21.481633   54245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.key.39c2a78a ...
	I0425 19:46:21.481651   54245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.key.39c2a78a: {Name:mk70af194794abfaf522efb998a95332f0faf4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:46:21.481743   54245 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.crt.39c2a78a -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.crt
	I0425 19:46:21.481857   54245 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.key.39c2a78a -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.key
	I0425 19:46:21.481938   54245 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/proxy-client.key
	I0425 19:46:21.481960   54245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/proxy-client.crt with IP's: []
	I0425 19:46:21.778502   54245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/proxy-client.crt ...
	I0425 19:46:21.778568   54245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/proxy-client.crt: {Name:mka935e82d8d7cdf30f54b173aea47a3f951ab91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:46:21.778724   54245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/proxy-client.key ...
	I0425 19:46:21.778743   54245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/proxy-client.key: {Name:mkef5652b0d33e30f9f64ebfa0b6a7aa84ab5244 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:46:21.778981   54245 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 19:46:21.779035   54245 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 19:46:21.779046   54245 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 19:46:21.779077   54245 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 19:46:21.779101   54245 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 19:46:21.779121   54245 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 19:46:21.779157   54245 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:46:21.779682   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 19:46:21.818505   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 19:46:21.854744   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 19:46:21.884568   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 19:46:21.934854   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0425 19:46:21.978993   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 19:46:22.025292   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 19:46:22.055604   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0425 19:46:22.119852   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 19:46:22.148492   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 19:46:22.177601   54245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 19:46:22.213775   54245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 19:46:22.235454   54245 ssh_runner.go:195] Run: openssl version
	I0425 19:46:22.242891   54245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 19:46:22.259757   54245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:46:22.265654   54245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:46:22.265731   54245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:46:22.272687   54245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 19:46:22.286770   54245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 19:46:22.300340   54245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 19:46:22.306306   54245 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 19:46:22.306377   54245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 19:46:22.313485   54245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 19:46:22.329137   54245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 19:46:22.343834   54245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 19:46:22.349737   54245 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:46:22.349800   54245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 19:46:22.356886   54245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
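Each of the three certificate installs above follows the same pattern: copy the PEM into /usr/share/ca-certificates, then create a symlink under /etc/ssl/certs named after the OpenSSL subject hash (b5213941, 51391683 and 3ec20f2e in this run) so TLS libraries can find it. The pattern, as a stand-alone sketch:

    # Sketch: install a CA certificate the way the steps above do.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")    # e.g. b5213941 for minikubeCA.pem
    sudo ln -fs "$cert" /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"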
	I0425 19:46:22.375050   54245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 19:46:22.382828   54245 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 19:46:22.382889   54245 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-215221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-215221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.198 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:46:22.382991   54245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 19:46:22.383047   54245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 19:46:22.435008   54245 cri.go:89] found id: ""
	I0425 19:46:22.435090   54245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0425 19:46:22.447568   54245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 19:46:22.459858   54245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 19:46:22.471910   54245 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 19:46:22.471942   54245 kubeadm.go:156] found existing configuration files:
	
	I0425 19:46:22.471997   54245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 19:46:22.485236   54245 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 19:46:22.485307   54245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 19:46:22.498428   54245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 19:46:22.510927   54245 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 19:46:22.511009   54245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 19:46:22.524514   54245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 19:46:22.536835   54245 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 19:46:22.536911   54245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 19:46:22.549531   54245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 19:46:22.561889   54245 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 19:46:22.561970   54245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 19:46:22.575379   54245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 19:46:22.719065   54245 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 19:46:22.719747   54245 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 19:46:22.930370   54245 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 19:46:22.930544   54245 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 19:46:22.930699   54245 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 19:46:23.166694   54245 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 19:46:23.298978   54245 out.go:204]   - Generating certificates and keys ...
	I0425 19:46:23.299126   54245 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 19:46:23.299226   54245 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 19:46:23.607692   54245 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0425 19:46:23.863142   54245 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0425 19:46:24.015206   54245 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0425 19:46:24.215164   54245 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0425 19:46:24.362787   54245 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0425 19:46:24.363036   54245 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-215221 localhost] and IPs [192.168.61.198 127.0.0.1 ::1]
	I0425 19:46:24.500533   54245 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0425 19:46:24.500766   54245 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-215221 localhost] and IPs [192.168.61.198 127.0.0.1 ::1]
	I0425 19:46:24.635794   54245 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0425 19:46:24.858063   54245 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0425 19:46:25.019136   54245 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0425 19:46:25.019436   54245 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 19:46:25.409253   54245 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 19:46:25.604333   54245 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 19:46:25.910480   54245 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 19:46:26.007012   54245 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 19:46:26.026050   54245 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 19:46:26.027991   54245 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 19:46:26.028056   54245 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 19:46:26.181526   54245 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 19:46:26.184852   54245 out.go:204]   - Booting up control plane ...
	I0425 19:46:26.184988   54245 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 19:46:26.201598   54245 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 19:46:26.205407   54245 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 19:46:26.207080   54245 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 19:46:26.213988   54245 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 19:47:06.211824   54245 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 19:47:06.214325   54245 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:47:06.214655   54245 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:47:11.215485   54245 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:47:11.216229   54245 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:47:21.217022   54245 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:47:21.217353   54245 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:47:41.218857   54245 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:47:41.219165   54245 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:48:21.218707   54245 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:48:21.218947   54245 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:48:21.218959   54245 kubeadm.go:309] 
	I0425 19:48:21.218994   54245 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 19:48:21.219038   54245 kubeadm.go:309] 		timed out waiting for the condition
	I0425 19:48:21.219079   54245 kubeadm.go:309] 
	I0425 19:48:21.219156   54245 kubeadm.go:309] 	This error is likely caused by:
	I0425 19:48:21.219225   54245 kubeadm.go:309] 		- The kubelet is not running
	I0425 19:48:21.219398   54245 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 19:48:21.219414   54245 kubeadm.go:309] 
	I0425 19:48:21.219559   54245 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 19:48:21.219621   54245 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 19:48:21.219664   54245 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 19:48:21.219671   54245 kubeadm.go:309] 
	I0425 19:48:21.219822   54245 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 19:48:21.219949   54245 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 19:48:21.219962   54245 kubeadm.go:309] 
	I0425 19:48:21.220091   54245 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 19:48:21.220200   54245 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 19:48:21.220310   54245 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 19:48:21.220395   54245 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 19:48:21.220403   54245 kubeadm.go:309] 
	I0425 19:48:21.221043   54245 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 19:48:21.221134   54245 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 19:48:21.221219   54245 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0425 19:48:21.221372   54245 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-215221 localhost] and IPs [192.168.61.198 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-215221 localhost] and IPs [192.168.61.198 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0425 19:48:21.221432   54245 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 19:48:23.485468   54245 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.263994867s)
	I0425 19:48:23.485549   54245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 19:48:23.501660   54245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 19:48:23.516462   54245 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 19:48:23.516486   54245 kubeadm.go:156] found existing configuration files:
	
	I0425 19:48:23.516538   54245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 19:48:23.529985   54245 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 19:48:23.530045   54245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 19:48:23.543992   54245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 19:48:23.554742   54245 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 19:48:23.554800   54245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 19:48:23.565938   54245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 19:48:23.576777   54245 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 19:48:23.576840   54245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 19:48:23.587950   54245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 19:48:23.598746   54245 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 19:48:23.598828   54245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 19:48:23.609563   54245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 19:48:23.688399   54245 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 19:48:23.688525   54245 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 19:48:23.864425   54245 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 19:48:23.864616   54245 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 19:48:23.864753   54245 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 19:48:24.118099   54245 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 19:48:24.121219   54245 out.go:204]   - Generating certificates and keys ...
	I0425 19:48:24.121325   54245 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 19:48:24.121377   54245 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 19:48:24.121452   54245 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 19:48:24.121535   54245 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 19:48:24.121620   54245 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 19:48:24.121681   54245 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 19:48:24.121765   54245 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 19:48:24.121837   54245 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 19:48:24.121926   54245 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 19:48:24.122358   54245 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 19:48:24.122416   54245 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 19:48:24.122502   54245 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 19:48:24.418300   54245 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 19:48:24.779987   54245 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 19:48:25.129619   54245 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 19:48:25.664336   54245 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 19:48:25.681383   54245 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 19:48:25.683214   54245 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 19:48:25.683289   54245 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 19:48:25.858520   54245 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 19:48:25.860887   54245 out.go:204]   - Booting up control plane ...
	I0425 19:48:25.861044   54245 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 19:48:25.862866   54245 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 19:48:25.863989   54245 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 19:48:25.864743   54245 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 19:48:25.868118   54245 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 19:49:05.869196   54245 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 19:49:05.869620   54245 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:49:05.869811   54245 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:49:10.870904   54245 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:49:10.871209   54245 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:49:20.871901   54245 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:49:20.872191   54245 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:49:40.873097   54245 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:49:40.873287   54245 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:50:20.873833   54245 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:50:20.874045   54245 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:50:20.874055   54245 kubeadm.go:309] 
	I0425 19:50:20.874101   54245 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 19:50:20.874148   54245 kubeadm.go:309] 		timed out waiting for the condition
	I0425 19:50:20.874158   54245 kubeadm.go:309] 
	I0425 19:50:20.874242   54245 kubeadm.go:309] 	This error is likely caused by:
	I0425 19:50:20.874292   54245 kubeadm.go:309] 		- The kubelet is not running
	I0425 19:50:20.874449   54245 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 19:50:20.874484   54245 kubeadm.go:309] 
	I0425 19:50:20.874648   54245 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 19:50:20.874707   54245 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 19:50:20.874743   54245 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 19:50:20.874749   54245 kubeadm.go:309] 
	I0425 19:50:20.874833   54245 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 19:50:20.874945   54245 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 19:50:20.874964   54245 kubeadm.go:309] 
	I0425 19:50:20.875124   54245 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 19:50:20.875248   54245 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 19:50:20.875380   54245 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 19:50:20.875486   54245 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 19:50:20.875502   54245 kubeadm.go:309] 
	I0425 19:50:20.876507   54245 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 19:50:20.876624   54245 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 19:50:20.876716   54245 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0425 19:50:20.876790   54245 kubeadm.go:393] duration metric: took 3m58.493904993s to StartCluster
	I0425 19:50:20.876835   54245 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 19:50:20.876887   54245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 19:50:20.951818   54245 cri.go:89] found id: ""
	I0425 19:50:20.951845   54245 logs.go:276] 0 containers: []
	W0425 19:50:20.951854   54245 logs.go:278] No container was found matching "kube-apiserver"
	I0425 19:50:20.951862   54245 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 19:50:20.951917   54245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 19:50:21.001257   54245 cri.go:89] found id: ""
	I0425 19:50:21.001280   54245 logs.go:276] 0 containers: []
	W0425 19:50:21.001289   54245 logs.go:278] No container was found matching "etcd"
	I0425 19:50:21.001296   54245 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 19:50:21.001349   54245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 19:50:21.056628   54245 cri.go:89] found id: ""
	I0425 19:50:21.056658   54245 logs.go:276] 0 containers: []
	W0425 19:50:21.056670   54245 logs.go:278] No container was found matching "coredns"
	I0425 19:50:21.056678   54245 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 19:50:21.056743   54245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 19:50:21.126802   54245 cri.go:89] found id: ""
	I0425 19:50:21.126825   54245 logs.go:276] 0 containers: []
	W0425 19:50:21.126836   54245 logs.go:278] No container was found matching "kube-scheduler"
	I0425 19:50:21.126843   54245 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 19:50:21.126899   54245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 19:50:21.173951   54245 cri.go:89] found id: ""
	I0425 19:50:21.173980   54245 logs.go:276] 0 containers: []
	W0425 19:50:21.173988   54245 logs.go:278] No container was found matching "kube-proxy"
	I0425 19:50:21.173996   54245 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 19:50:21.174047   54245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 19:50:21.222763   54245 cri.go:89] found id: ""
	I0425 19:50:21.222794   54245 logs.go:276] 0 containers: []
	W0425 19:50:21.222804   54245 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 19:50:21.222812   54245 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 19:50:21.222869   54245 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 19:50:21.267203   54245 cri.go:89] found id: ""
	I0425 19:50:21.267235   54245 logs.go:276] 0 containers: []
	W0425 19:50:21.267247   54245 logs.go:278] No container was found matching "kindnet"
	I0425 19:50:21.267259   54245 logs.go:123] Gathering logs for describe nodes ...
	I0425 19:50:21.267280   54245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 19:50:21.433317   54245 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 19:50:21.433346   54245 logs.go:123] Gathering logs for CRI-O ...
	I0425 19:50:21.433364   54245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 19:50:21.603338   54245 logs.go:123] Gathering logs for container status ...
	I0425 19:50:21.603388   54245 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 19:50:21.659631   54245 logs.go:123] Gathering logs for kubelet ...
	I0425 19:50:21.659668   54245 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 19:50:21.741743   54245 logs.go:123] Gathering logs for dmesg ...
	I0425 19:50:21.741801   54245 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0425 19:50:21.760242   54245 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0425 19:50:21.760311   54245 out.go:239] * 
	W0425 19:50:21.760533   54245 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 19:50:21.760569   54245 out.go:239] * 
	* 
	W0425 19:50:21.761637   54245 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 19:50:21.765277   54245 out.go:177] 
	W0425 19:50:21.767054   54245 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 19:50:21.767124   54245 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0425 19:50:21.767190   54245 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0425 19:50:21.769569   54245 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-215221 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
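The suggestion at the end of the stderr above is to retry with the systemd cgroup driver. A minimal sketch of that retry, reusing the profile and flags from this run and adding only the suggested --extra-config flag (this exact invocation is not part of the test):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-215221 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd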
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-215221
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-215221: (3.345307281s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-215221 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-215221 status --format={{.Host}}: exit status 7 (87.172017ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-215221 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-215221 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.432701776s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-215221 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-215221 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-215221 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (109.915619ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-215221] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18757
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-215221
	    minikube start -p kubernetes-upgrade-215221 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2152212 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-215221 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
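The rejection above is the expected result of the downgrade attempt. A minimal sketch of asserting it from a shell, keyed off the exit code observed in this run (106, reported as K8S_DOWNGRADE_UNSUPPORTED); the rc variable is illustrative only:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-215221 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio; rc=$?
	if [ "$rc" -eq 106 ]; then echo "downgrade rejected as expected (K8S_DOWNGRADE_UNSUPPORTED)"; fi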
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-215221 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-215221 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (26.337000506s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-25 19:51:58.227403816 +0000 UTC m=+4844.636338383
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-215221 -n kubernetes-upgrade-215221
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-215221 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-215221 logs -n 25: (1.468282128s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo cat                            | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo cat                            | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo cat                            | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo docker                         | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo cat                            | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo cat                            | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo cat                            | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo cat                            | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-120641 pgrep                       | custom-flannel-120641 | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | -a kubelet                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo                                | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo find                           | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-120641 sudo crio                           | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p calico-120641                                     | calico-120641         | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC | 25 Apr 24 19:51 UTC |
	| start   | -p flannel-120641                                    | flannel-120641        | jenkins | v1.33.0 | 25 Apr 24 19:51 UTC |                     |
	|         | --memory=3072                                        |                       |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:51:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:51:50.523797   63050 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:51:50.524103   63050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:51:50.524114   63050 out.go:304] Setting ErrFile to fd 2...
	I0425 19:51:50.524121   63050 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:51:50.524418   63050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:51:50.525119   63050 out.go:298] Setting JSON to false
	I0425 19:51:50.526483   63050 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5656,"bootTime":1714069054,"procs":286,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:51:50.526591   63050 start.go:139] virtualization: kvm guest
	I0425 19:51:50.529606   63050 out.go:177] * [flannel-120641] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:51:50.531170   63050 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:51:50.531108   63050 notify.go:220] Checking for updates...
	I0425 19:51:50.532564   63050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:51:50.534062   63050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:51:50.535649   63050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:51:50.538394   63050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:51:50.540351   63050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:51:50.542963   63050 config.go:182] Loaded profile config "custom-flannel-120641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:51:50.543144   63050 config.go:182] Loaded profile config "enable-default-cni-120641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:51:50.543303   63050 config.go:182] Loaded profile config "kubernetes-upgrade-215221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:51:50.543470   63050 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:51:50.594273   63050 out.go:177] * Using the kvm2 driver based on user configuration
	I0425 19:51:50.595431   63050 start.go:297] selected driver: kvm2
	I0425 19:51:50.595449   63050 start.go:901] validating driver "kvm2" against <nil>
	I0425 19:51:50.595463   63050 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:51:50.596507   63050 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:51:50.596607   63050 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:51:50.614317   63050 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:51:50.614390   63050 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 19:51:50.614679   63050 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:51:50.614714   63050 cni.go:84] Creating CNI manager for "flannel"
	I0425 19:51:50.614724   63050 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0425 19:51:50.614786   63050 start.go:340] cluster config:
	{Name:flannel-120641 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:flannel-120641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:51:50.614905   63050 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:51:50.616423   63050 out.go:177] * Starting "flannel-120641" primary control-plane node in "flannel-120641" cluster
	I0425 19:51:50.617670   63050 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:51:50.617714   63050 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:51:50.617723   63050 cache.go:56] Caching tarball of preloaded images
	I0425 19:51:50.617829   63050 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:51:50.617842   63050 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 19:51:50.617974   63050 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/config.json ...
	I0425 19:51:50.618001   63050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/config.json: {Name:mkb3e79d18c1fa4b3715cc28efd52536ee817a7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:51:50.618156   63050 start.go:360] acquireMachinesLock for flannel-120641: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:51:50.618193   63050 start.go:364] duration metric: took 19.318µs to acquireMachinesLock for "flannel-120641"
	I0425 19:51:50.618238   63050 start.go:93] Provisioning new machine with config: &{Name:flannel-120641 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.0 ClusterName:flannel-120641 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 19:51:50.618333   63050 start.go:125] createHost starting for "" (driver="kvm2")
	I0425 19:51:46.974399   61538 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:51:46.974427   61538 crio.go:433] Images already preloaded, skipping extraction
	I0425 19:51:46.974483   61538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:51:47.022996   61538 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:51:47.023026   61538 cache_images.go:84] Images are preloaded, skipping loading
	I0425 19:51:47.023036   61538 kubeadm.go:928] updating node { 192.168.61.198 8443 v1.30.0 crio true true} ...
	I0425 19:51:47.023163   61538 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-215221 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-215221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 19:51:47.023233   61538 ssh_runner.go:195] Run: crio config
	I0425 19:51:47.093918   61538 cni.go:84] Creating CNI manager for ""
	I0425 19:51:47.093950   61538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:51:47.093969   61538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 19:51:47.093997   61538 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.198 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-215221 NodeName:kubernetes-upgrade-215221 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 19:51:47.094197   61538 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.198
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-215221"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 19:51:47.094289   61538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 19:51:47.107502   61538 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 19:51:47.107565   61538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 19:51:47.122812   61538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0425 19:51:47.145906   61538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 19:51:47.168222   61538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0425 19:51:47.190827   61538 ssh_runner.go:195] Run: grep 192.168.61.198	control-plane.minikube.internal$ /etc/hosts
	I0425 19:51:47.196145   61538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:51:47.334000   61538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 19:51:47.357456   61538 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221 for IP: 192.168.61.198
	I0425 19:51:47.357476   61538 certs.go:194] generating shared ca certs ...
	I0425 19:51:47.357493   61538 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:51:47.357624   61538 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 19:51:47.357691   61538 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 19:51:47.357704   61538 certs.go:256] generating profile certs ...
	I0425 19:51:47.357801   61538 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/client.key
	I0425 19:51:47.357863   61538 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.key.39c2a78a
	I0425 19:51:47.357910   61538 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/proxy-client.key
	I0425 19:51:47.358050   61538 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 19:51:47.358084   61538 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 19:51:47.358099   61538 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 19:51:47.358135   61538 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 19:51:47.358172   61538 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 19:51:47.358200   61538 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 19:51:47.358283   61538 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:51:47.359118   61538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 19:51:47.397965   61538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 19:51:47.429242   61538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 19:51:47.459140   61538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 19:51:47.489460   61538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0425 19:51:47.517466   61538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 19:51:47.547669   61538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 19:51:47.582527   61538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kubernetes-upgrade-215221/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0425 19:51:47.614386   61538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 19:51:47.648554   61538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 19:51:47.727815   61538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 19:51:47.885280   61538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 19:51:48.011263   61538 ssh_runner.go:195] Run: openssl version
	I0425 19:51:48.058426   61538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 19:51:48.097609   61538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 19:51:48.112751   61538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:51:48.112806   61538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 19:51:48.141792   61538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 19:51:48.178327   61538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 19:51:48.197590   61538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:51:48.204786   61538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:51:48.204841   61538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:51:48.216860   61538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 19:51:48.231664   61538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 19:51:48.246739   61538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 19:51:48.252902   61538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 19:51:48.252959   61538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 19:51:48.263500   61538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 19:51:48.275457   61538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 19:51:48.282468   61538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 19:51:48.290970   61538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 19:51:48.300616   61538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 19:51:48.314607   61538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 19:51:48.332274   61538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 19:51:48.345741   61538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 19:51:48.357761   61538 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-215221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.0 ClusterName:kubernetes-upgrade-215221 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.198 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:51:48.357869   61538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 19:51:48.357925   61538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 19:51:48.416876   61538 cri.go:89] found id: "0ad5a93d715454918acc4ac3809aaabff0a639f14a5fe0edb71a887aa6ce8484"
	I0425 19:51:48.416901   61538 cri.go:89] found id: "6b78b896e89cfb106a637397535c2eef7c9823d3288b40b62d4c6fb95ac19098"
	I0425 19:51:48.416906   61538 cri.go:89] found id: "b91b455ed137fa261bedd91eceda80ea86f59536732770766b6b768b8a244984"
	I0425 19:51:48.416911   61538 cri.go:89] found id: "63869f74ffd342f740fa94b0bce91da828725facfe250bae05c68876de046999"
	I0425 19:51:48.416915   61538 cri.go:89] found id: ""
	I0425 19:51:48.416958   61538 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.007451228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714074719007423831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9abfb715-b896-4465-84e7-d61ae522d34d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.008315002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8631fdf7-af8c-4471-a0cb-0a4c3ec3d299 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.008391507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8631fdf7-af8c-4471-a0cb-0a4c3ec3d299 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.008980522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3c42c7606618595cee484a4a5496def2b095e18d09c3a0e9c4a26a8b05942d0,PodSandboxId:56588a14dd7d55b8ff2b9d3c52ec1edfd702a9308f3a606a055bdc4149d96198,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074711495892862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a06014053eab2964771cd1c5d47498b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963182278c07a97891020748dcd4d9535aee0ebea5ab9ad79cecab3bd6d6748b,PodSandboxId:addd3ddaffe01d670484d596cff9f6615472a3a57b4d71e07eca5bf8be592566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074711478139809,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16de5f056cc153880a837eaf1c32aebc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2beb5550dfe950e0097c6a32952bdbd450b6fdb5c6cabf03f3745131c2f24a05,PodSandboxId:b9d2c0a41690285a46ea865b2fe19027f84cf44b7d327a8d7d2d451661a8bc8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074711456459710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a805e19323b97e84a4270bfb522c9c30,},Annotations:map[string]string{io.kubernetes.container.hash: cf156875,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1e770c29737abd21ce7fe460b52099b3bccd8c713d8388b26b066ede55b0f0,PodSandboxId:d23e67bc8c58e05c2327c31531171589c51a8cbc6101378a5d6fd2c814da8fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074711404888376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377ba06daedd0f310648fd202080c5ad,},Annotations:map[string]string{io.kubernetes.container.hash: e362d8d5,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad5a93d715454918acc4ac3809aaabff0a639f14a5fe0edb71a887aa6ce8484,PodSandboxId:187a69fd32e5e32c3214b35be78c027b68af9faefa128aa47780aba836c0c3ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074702065391814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a06014053eab2964771cd1c5d47498b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78b896e89cfb106a637397535c2eef7c9823d3288b40b62d4c6fb95ac19098,PodSandboxId:40144175ea4cbf1f7c395c6e788993f36a065e98915b2344d22454397e290344,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074702027827466,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377ba06daedd0f310648fd202080c5ad,},Annotations:map[string]string{io.kubernetes.container.hash: e362d8d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b455ed137fa261bedd91eceda80ea86f59536732770766b6b768b8a244984,PodSandboxId:848942a31f3d133195a905bdeb470b520115034f9314118d9851d927c0282e7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074701956270893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16de5f056cc153880a837eaf1c32aebc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63869f74ffd342f740fa94b0bce91da828725facfe250bae05c68876de046999,PodSandboxId:779c96b46d0005e13ef46b3446bd3867ff032512f22925c237052709c717f3f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074701903696042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a805e19323b97e84a4270bfb522c9c30,},Annotations:map[string]string{io.kubernetes.container.hash: cf156875,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8631fdf7-af8c-4471-a0cb-0a4c3ec3d299 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.049933904Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5bd595d1-db31-4f30-8666-ef7bd6788d82 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.050010124Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5bd595d1-db31-4f30-8666-ef7bd6788d82 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.051627552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2fd63dcc-1e3d-4da2-abe4-de671d947f59 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.052169153Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714074719052136371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fd63dcc-1e3d-4da2-abe4-de671d947f59 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.054963298Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f087cdc2-cd05-4c22-924e-f52270eb8587 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.055020729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f087cdc2-cd05-4c22-924e-f52270eb8587 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.055629324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3c42c7606618595cee484a4a5496def2b095e18d09c3a0e9c4a26a8b05942d0,PodSandboxId:56588a14dd7d55b8ff2b9d3c52ec1edfd702a9308f3a606a055bdc4149d96198,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074711495892862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a06014053eab2964771cd1c5d47498b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963182278c07a97891020748dcd4d9535aee0ebea5ab9ad79cecab3bd6d6748b,PodSandboxId:addd3ddaffe01d670484d596cff9f6615472a3a57b4d71e07eca5bf8be592566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074711478139809,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16de5f056cc153880a837eaf1c32aebc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2beb5550dfe950e0097c6a32952bdbd450b6fdb5c6cabf03f3745131c2f24a05,PodSandboxId:b9d2c0a41690285a46ea865b2fe19027f84cf44b7d327a8d7d2d451661a8bc8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074711456459710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a805e19323b97e84a4270bfb522c9c30,},Annotations:map[string]string{io.kubernetes.container.hash: cf156875,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1e770c29737abd21ce7fe460b52099b3bccd8c713d8388b26b066ede55b0f0,PodSandboxId:d23e67bc8c58e05c2327c31531171589c51a8cbc6101378a5d6fd2c814da8fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074711404888376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377ba06daedd0f310648fd202080c5ad,},Annotations:map[string]string{io.kubernetes.container.hash: e362d8d5,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad5a93d715454918acc4ac3809aaabff0a639f14a5fe0edb71a887aa6ce8484,PodSandboxId:187a69fd32e5e32c3214b35be78c027b68af9faefa128aa47780aba836c0c3ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074702065391814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a06014053eab2964771cd1c5d47498b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78b896e89cfb106a637397535c2eef7c9823d3288b40b62d4c6fb95ac19098,PodSandboxId:40144175ea4cbf1f7c395c6e788993f36a065e98915b2344d22454397e290344,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074702027827466,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377ba06daedd0f310648fd202080c5ad,},Annotations:map[string]string{io.kubernetes.container.hash: e362d8d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b455ed137fa261bedd91eceda80ea86f59536732770766b6b768b8a244984,PodSandboxId:848942a31f3d133195a905bdeb470b520115034f9314118d9851d927c0282e7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074701956270893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16de5f056cc153880a837eaf1c32aebc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63869f74ffd342f740fa94b0bce91da828725facfe250bae05c68876de046999,PodSandboxId:779c96b46d0005e13ef46b3446bd3867ff032512f22925c237052709c717f3f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074701903696042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a805e19323b97e84a4270bfb522c9c30,},Annotations:map[string]string{io.kubernetes.container.hash: cf156875,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f087cdc2-cd05-4c22-924e-f52270eb8587 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.121187986Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a9b79de-2bf3-4c70-981d-c973b03eb202 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.121428101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a9b79de-2bf3-4c70-981d-c973b03eb202 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.122918049Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e056c7c0-da3c-449c-bebb-63cfad8b506f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.123397337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714074719123370895,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e056c7c0-da3c-449c-bebb-63cfad8b506f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.123934331Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=545ebdda-76d3-4a36-b281-7301c8f1848d name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.124017137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=545ebdda-76d3-4a36-b281-7301c8f1848d name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.124396870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3c42c7606618595cee484a4a5496def2b095e18d09c3a0e9c4a26a8b05942d0,PodSandboxId:56588a14dd7d55b8ff2b9d3c52ec1edfd702a9308f3a606a055bdc4149d96198,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074711495892862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a06014053eab2964771cd1c5d47498b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963182278c07a97891020748dcd4d9535aee0ebea5ab9ad79cecab3bd6d6748b,PodSandboxId:addd3ddaffe01d670484d596cff9f6615472a3a57b4d71e07eca5bf8be592566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074711478139809,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16de5f056cc153880a837eaf1c32aebc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2beb5550dfe950e0097c6a32952bdbd450b6fdb5c6cabf03f3745131c2f24a05,PodSandboxId:b9d2c0a41690285a46ea865b2fe19027f84cf44b7d327a8d7d2d451661a8bc8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074711456459710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a805e19323b97e84a4270bfb522c9c30,},Annotations:map[string]string{io.kubernetes.container.hash: cf156875,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1e770c29737abd21ce7fe460b52099b3bccd8c713d8388b26b066ede55b0f0,PodSandboxId:d23e67bc8c58e05c2327c31531171589c51a8cbc6101378a5d6fd2c814da8fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074711404888376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377ba06daedd0f310648fd202080c5ad,},Annotations:map[string]string{io.kubernetes.container.hash: e362d8d5,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad5a93d715454918acc4ac3809aaabff0a639f14a5fe0edb71a887aa6ce8484,PodSandboxId:187a69fd32e5e32c3214b35be78c027b68af9faefa128aa47780aba836c0c3ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074702065391814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a06014053eab2964771cd1c5d47498b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78b896e89cfb106a637397535c2eef7c9823d3288b40b62d4c6fb95ac19098,PodSandboxId:40144175ea4cbf1f7c395c6e788993f36a065e98915b2344d22454397e290344,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074702027827466,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377ba06daedd0f310648fd202080c5ad,},Annotations:map[string]string{io.kubernetes.container.hash: e362d8d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b455ed137fa261bedd91eceda80ea86f59536732770766b6b768b8a244984,PodSandboxId:848942a31f3d133195a905bdeb470b520115034f9314118d9851d927c0282e7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074701956270893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16de5f056cc153880a837eaf1c32aebc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63869f74ffd342f740fa94b0bce91da828725facfe250bae05c68876de046999,PodSandboxId:779c96b46d0005e13ef46b3446bd3867ff032512f22925c237052709c717f3f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074701903696042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a805e19323b97e84a4270bfb522c9c30,},Annotations:map[string]string{io.kubernetes.container.hash: cf156875,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=545ebdda-76d3-4a36-b281-7301c8f1848d name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.176320718Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f3a037f-e97c-4b5b-8a09-7315008c6c13 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.176441012Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f3a037f-e97c-4b5b-8a09-7315008c6c13 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.178418486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe42c161-229a-4665-8a5b-9dc06c833290 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.178822364Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714074719178799217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe42c161-229a-4665-8a5b-9dc06c833290 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.179355028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6df204ef-d0ec-4d6d-afb1-577cd594efb0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.179442712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6df204ef-d0ec-4d6d-afb1-577cd594efb0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:51:59 kubernetes-upgrade-215221 crio[1891]: time="2024-04-25 19:51:59.179692753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c3c42c7606618595cee484a4a5496def2b095e18d09c3a0e9c4a26a8b05942d0,PodSandboxId:56588a14dd7d55b8ff2b9d3c52ec1edfd702a9308f3a606a055bdc4149d96198,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074711495892862,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a06014053eab2964771cd1c5d47498b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963182278c07a97891020748dcd4d9535aee0ebea5ab9ad79cecab3bd6d6748b,PodSandboxId:addd3ddaffe01d670484d596cff9f6615472a3a57b4d71e07eca5bf8be592566,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074711478139809,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16de5f056cc153880a837eaf1c32aebc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.contai
ner.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2beb5550dfe950e0097c6a32952bdbd450b6fdb5c6cabf03f3745131c2f24a05,PodSandboxId:b9d2c0a41690285a46ea865b2fe19027f84cf44b7d327a8d7d2d451661a8bc8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074711456459710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a805e19323b97e84a4270bfb522c9c30,},Annotations:map[string]string{io.kubernetes.container.hash: cf156875,io.kubernetes.container.r
estartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de1e770c29737abd21ce7fe460b52099b3bccd8c713d8388b26b066ede55b0f0,PodSandboxId:d23e67bc8c58e05c2327c31531171589c51a8cbc6101378a5d6fd2c814da8fcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074711404888376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377ba06daedd0f310648fd202080c5ad,},Annotations:map[string]string{io.kubernetes.container.hash: e362d8d5,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad5a93d715454918acc4ac3809aaabff0a639f14a5fe0edb71a887aa6ce8484,PodSandboxId:187a69fd32e5e32c3214b35be78c027b68af9faefa128aa47780aba836c0c3ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074702065391814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a06014053eab2964771cd1c5d47498b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b78b896e89cfb106a637397535c2eef7c9823d3288b40b62d4c6fb95ac19098,PodSandboxId:40144175ea4cbf1f7c395c6e788993f36a065e98915b2344d22454397e290344,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074702027827466,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 377ba06daedd0f310648fd202080c5ad,},Annotations:map[string]string{io.kubernetes.container.hash: e362d8d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91b455ed137fa261bedd91eceda80ea86f59536732770766b6b768b8a244984,PodSandboxId:848942a31f3d133195a905bdeb470b520115034f9314118d9851d927c0282e7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074701956270893,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16de5f056cc153880a837eaf1c32aebc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63869f74ffd342f740fa94b0bce91da828725facfe250bae05c68876de046999,PodSandboxId:779c96b46d0005e13ef46b3446bd3867ff032512f22925c237052709c717f3f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074701903696042,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-215221,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a805e19323b97e84a4270bfb522c9c30,},Annotations:map[string]string{io.kubernetes.container.hash: cf156875,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6df204ef-d0ec-4d6d-afb1-577cd594efb0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c3c42c7606618       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   7 seconds ago       Running             kube-scheduler            2                   56588a14dd7d5       kube-scheduler-kubernetes-upgrade-215221
	963182278c07a       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   7 seconds ago       Running             kube-controller-manager   2                   addd3ddaffe01       kube-controller-manager-kubernetes-upgrade-215221
	2beb5550dfe95       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   7 seconds ago       Running             kube-apiserver            2                   b9d2c0a416902       kube-apiserver-kubernetes-upgrade-215221
	de1e770c29737       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 seconds ago       Running             etcd                      2                   d23e67bc8c58e       etcd-kubernetes-upgrade-215221
	0ad5a93d71545       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   17 seconds ago      Exited              kube-scheduler            1                   187a69fd32e5e       kube-scheduler-kubernetes-upgrade-215221
	6b78b896e89cf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   17 seconds ago      Exited              etcd                      1                   40144175ea4cb       etcd-kubernetes-upgrade-215221
	b91b455ed137f       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   17 seconds ago      Exited              kube-controller-manager   1                   848942a31f3d1       kube-controller-manager-kubernetes-upgrade-215221
	63869f74ffd34       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   17 seconds ago      Exited              kube-apiserver            1                   779c96b46d000       kube-apiserver-kubernetes-upgrade-215221
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-215221
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-215221
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:51:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-215221
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:51:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:51:55 +0000   Thu, 25 Apr 2024 19:51:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:51:55 +0000   Thu, 25 Apr 2024 19:51:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:51:55 +0000   Thu, 25 Apr 2024 19:51:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:51:55 +0000   Thu, 25 Apr 2024 19:51:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.198
	  Hostname:    kubernetes-upgrade-215221
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 49c98861af764529801481bc9588aab5
	  System UUID:                49c98861-af76-4529-8014-81bc9588aab5
	  Boot ID:                    b38e65fd-0b38-493a-83a5-4e7c1de0a131
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-215221                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         30s
	  kube-system                 kube-apiserver-kubernetes-upgrade-215221             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-215221    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-kubernetes-upgrade-215221             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  36s (x8 over 37s)  kubelet  Node kubernetes-upgrade-215221 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 37s)  kubelet  Node kubernetes-upgrade-215221 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x7 over 37s)  kubelet  Node kubernetes-upgrade-215221 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  9s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8s (x8 over 9s)    kubelet  Node kubernetes-upgrade-215221 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 9s)    kubelet  Node kubernetes-upgrade-215221 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 9s)    kubelet  Node kubernetes-upgrade-215221 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[Apr25 19:51] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.320779] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.062005] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072219] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.206234] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.155953] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.337093] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +5.695808] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +0.077026] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.284262] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[  +8.185302] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	[  +0.115679] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.698859] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.368393] systemd-fstab-generator[1812]: Ignoring "noauto" option for root device
	[  +0.266314] systemd-fstab-generator[1824]: Ignoring "noauto" option for root device
	[  +0.277455] systemd-fstab-generator[1838]: Ignoring "noauto" option for root device
	[  +0.260704] systemd-fstab-generator[1850]: Ignoring "noauto" option for root device
	[  +0.482563] systemd-fstab-generator[1878]: Ignoring "noauto" option for root device
	[  +2.948962] systemd-fstab-generator[2070]: Ignoring "noauto" option for root device
	[  +0.071124] kauditd_printk_skb: 140 callbacks suppressed
	[  +3.122789] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +6.509285] systemd-fstab-generator[2603]: Ignoring "noauto" option for root device
	[  +0.132621] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [6b78b896e89cfb106a637397535c2eef7c9823d3288b40b62d4c6fb95ac19098] <==
	{"level":"info","ts":"2024-04-25T19:51:42.781406Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"59.065138ms"}
	{"level":"info","ts":"2024-04-25T19:51:42.80561Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-25T19:51:42.865898Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d6c206a88a9f7b0c","local-member-id":"d62ca2e8b6f194b3","commit-index":300}
	{"level":"info","ts":"2024-04-25T19:51:42.886155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d62ca2e8b6f194b3 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-25T19:51:42.889526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d62ca2e8b6f194b3 became follower at term 2"}
	{"level":"info","ts":"2024-04-25T19:51:42.896589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d62ca2e8b6f194b3 [peers: [], term: 2, commit: 300, applied: 0, lastindex: 300, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-25T19:51:42.900736Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-25T19:51:42.946282Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":293}
	{"level":"info","ts":"2024-04-25T19:51:42.981698Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-25T19:51:42.990802Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"d62ca2e8b6f194b3","timeout":"7s"}
	{"level":"info","ts":"2024-04-25T19:51:42.991081Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"d62ca2e8b6f194b3"}
	{"level":"info","ts":"2024-04-25T19:51:42.991171Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"d62ca2e8b6f194b3","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-25T19:51:42.992448Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-25T19:51:42.992617Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-25T19:51:42.992703Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-25T19:51:42.992718Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-25T19:51:42.992941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d62ca2e8b6f194b3 switched to configuration voters=(15432889143477245107)"}
	{"level":"info","ts":"2024-04-25T19:51:42.993054Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d6c206a88a9f7b0c","local-member-id":"d62ca2e8b6f194b3","added-peer-id":"d62ca2e8b6f194b3","added-peer-peer-urls":["https://192.168.61.198:2380"]}
	{"level":"info","ts":"2024-04-25T19:51:42.993261Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d6c206a88a9f7b0c","local-member-id":"d62ca2e8b6f194b3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:51:42.993332Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:51:42.999394Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-25T19:51:42.999697Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.198:2380"}
	{"level":"info","ts":"2024-04-25T19:51:42.999875Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.198:2380"}
	{"level":"info","ts":"2024-04-25T19:51:43.001964Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d62ca2e8b6f194b3","initial-advertise-peer-urls":["https://192.168.61.198:2380"],"listen-peer-urls":["https://192.168.61.198:2380"],"advertise-client-urls":["https://192.168.61.198:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.198:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-25T19:51:43.002043Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [de1e770c29737abd21ce7fe460b52099b3bccd8c713d8388b26b066ede55b0f0] <==
	{"level":"info","ts":"2024-04-25T19:51:51.954752Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-25T19:51:51.954781Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-25T19:51:51.960526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d62ca2e8b6f194b3 switched to configuration voters=(15432889143477245107)"}
	{"level":"info","ts":"2024-04-25T19:51:51.96065Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d6c206a88a9f7b0c","local-member-id":"d62ca2e8b6f194b3","added-peer-id":"d62ca2e8b6f194b3","added-peer-peer-urls":["https://192.168.61.198:2380"]}
	{"level":"info","ts":"2024-04-25T19:51:51.960792Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d6c206a88a9f7b0c","local-member-id":"d62ca2e8b6f194b3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:51:51.960895Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:51:51.967451Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-25T19:51:51.978065Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d62ca2e8b6f194b3","initial-advertise-peer-urls":["https://192.168.61.198:2380"],"listen-peer-urls":["https://192.168.61.198:2380"],"advertise-client-urls":["https://192.168.61.198:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.198:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-25T19:51:51.97851Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-25T19:51:51.972765Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.198:2380"}
	{"level":"info","ts":"2024-04-25T19:51:51.982298Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.198:2380"}
	{"level":"info","ts":"2024-04-25T19:51:53.497384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d62ca2e8b6f194b3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-25T19:51:53.497504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d62ca2e8b6f194b3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-25T19:51:53.49757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d62ca2e8b6f194b3 received MsgPreVoteResp from d62ca2e8b6f194b3 at term 2"}
	{"level":"info","ts":"2024-04-25T19:51:53.497612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d62ca2e8b6f194b3 became candidate at term 3"}
	{"level":"info","ts":"2024-04-25T19:51:53.49765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d62ca2e8b6f194b3 received MsgVoteResp from d62ca2e8b6f194b3 at term 3"}
	{"level":"info","ts":"2024-04-25T19:51:53.497684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d62ca2e8b6f194b3 became leader at term 3"}
	{"level":"info","ts":"2024-04-25T19:51:53.497724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d62ca2e8b6f194b3 elected leader d62ca2e8b6f194b3 at term 3"}
	{"level":"info","ts":"2024-04-25T19:51:53.717921Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d62ca2e8b6f194b3","local-member-attributes":"{Name:kubernetes-upgrade-215221 ClientURLs:[https://192.168.61.198:2379]}","request-path":"/0/members/d62ca2e8b6f194b3/attributes","cluster-id":"d6c206a88a9f7b0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T19:51:53.717946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:51:53.717975Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:51:53.718357Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T19:51:53.719012Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T19:51:53.721697Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.198:2379"}
	{"level":"info","ts":"2024-04-25T19:51:53.725701Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:51:59 up 1 min,  0 users,  load average: 2.15, 0.56, 0.19
	Linux kubernetes-upgrade-215221 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2beb5550dfe950e0097c6a32952bdbd450b6fdb5c6cabf03f3745131c2f24a05] <==
	I0425 19:51:55.219923       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0425 19:51:55.220126       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0425 19:51:55.378412       1 shared_informer.go:320] Caches are synced for configmaps
	I0425 19:51:55.383486       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0425 19:51:55.387528       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0425 19:51:55.388083       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0425 19:51:55.388130       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0425 19:51:55.388694       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0425 19:51:55.389083       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0425 19:51:55.402429       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0425 19:51:55.402497       1 policy_source.go:224] refreshing policies
	I0425 19:51:55.404670       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0425 19:51:55.408346       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0425 19:51:55.414142       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0425 19:51:55.414376       1 aggregator.go:165] initial CRD sync complete...
	I0425 19:51:55.414419       1 autoregister_controller.go:141] Starting autoregister controller
	I0425 19:51:55.414448       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0425 19:51:55.414477       1 cache.go:39] Caches are synced for autoregister controller
	E0425 19:51:55.419254       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0425 19:51:56.192594       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0425 19:51:56.666419       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0425 19:51:56.683512       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0425 19:51:56.724677       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0425 19:51:56.840351       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0425 19:51:56.848621       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [63869f74ffd342f740fa94b0bce91da828725facfe250bae05c68876de046999] <==
	I0425 19:51:42.687744       1 options.go:221] external host was not specified, using 192.168.61.198
	I0425 19:51:42.690394       1 server.go:148] Version: v1.30.0
	I0425 19:51:42.690693       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:51:43.699339       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0425 19:51:43.704968       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0425 19:51:43.705030       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0425 19:51:43.705309       1 instance.go:299] Using reconciler: lease
	I0425 19:51:43.706361       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0425 19:51:44.464754       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:56588->127.0.0.1:2379: read: connection reset by peer"
	W0425 19:51:44.466525       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:56558->127.0.0.1:2379: read: connection reset by peer"
	W0425 19:51:44.470712       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:56572->127.0.0.1:2379: read: connection reset by peer"
	
	
	==> kube-controller-manager [963182278c07a97891020748dcd4d9535aee0ebea5ab9ad79cecab3bd6d6748b] <==
	I0425 19:51:57.317077       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0425 19:51:57.317339       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0425 19:51:57.317353       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0425 19:51:57.317500       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	E0425 19:51:57.320115       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0425 19:51:57.320157       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0425 19:51:57.324380       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0425 19:51:57.324422       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0425 19:51:57.324519       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0425 19:51:57.324552       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0425 19:51:57.324572       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0425 19:51:57.345434       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0425 19:51:57.345589       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0425 19:51:57.345598       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0425 19:51:57.347455       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0425 19:51:57.347722       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0425 19:51:57.347771       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0425 19:51:57.347780       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0425 19:51:57.350905       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0425 19:51:57.351126       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0425 19:51:57.351143       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0425 19:51:57.354048       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0425 19:51:57.354283       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0425 19:51:57.354292       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0425 19:51:57.371147       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-controller-manager [b91b455ed137fa261bedd91eceda80ea86f59536732770766b6b768b8a244984] <==
	I0425 19:51:44.208519       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [0ad5a93d715454918acc4ac3809aaabff0a639f14a5fe0edb71a887aa6ce8484] <==
	I0425 19:51:44.631636       1 serving.go:380] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [c3c42c7606618595cee484a4a5496def2b095e18d09c3a0e9c4a26a8b05942d0] <==
	I0425 19:51:52.961727       1 serving.go:380] Generated self-signed cert in-memory
	W0425 19:51:55.379772       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0425 19:51:55.379864       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 19:51:55.379908       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0425 19:51:55.379939       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0425 19:51:55.411885       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0425 19:51:55.414441       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:51:55.418731       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0425 19:51:55.418878       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0425 19:51:55.420643       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0425 19:51:55.420790       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0425 19:51:55.519794       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.067868    2347 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40144175ea4cbf1f7c395c6e788993f36a065e98915b2344d22454397e290344"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.067878    2347 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcf77f7f642bbef04c3be2860304d7fffd9778e27993cf1c9f6c7baeda0bd87"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.134371    2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a06014053eab2964771cd1c5d47498b-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-215221\" (UID: \"1a06014053eab2964771cd1c5d47498b\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.134447    2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a805e19323b97e84a4270bfb522c9c30-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-215221\" (UID: \"a805e19323b97e84a4270bfb522c9c30\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.134485    2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a805e19323b97e84a4270bfb522c9c30-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-215221\" (UID: \"a805e19323b97e84a4270bfb522c9c30\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.134519    2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/16de5f056cc153880a837eaf1c32aebc-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-215221\" (UID: \"16de5f056cc153880a837eaf1c32aebc\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.134561    2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16de5f056cc153880a837eaf1c32aebc-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-215221\" (UID: \"16de5f056cc153880a837eaf1c32aebc\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.134603    2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16de5f056cc153880a837eaf1c32aebc-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-215221\" (UID: \"16de5f056cc153880a837eaf1c32aebc\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.134754    2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/377ba06daedd0f310648fd202080c5ad-etcd-data\") pod \"etcd-kubernetes-upgrade-215221\" (UID: \"377ba06daedd0f310648fd202080c5ad\") " pod="kube-system/etcd-kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.134783    2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a805e19323b97e84a4270bfb522c9c30-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-215221\" (UID: \"a805e19323b97e84a4270bfb522c9c30\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.134810    2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16de5f056cc153880a837eaf1c32aebc-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-215221\" (UID: \"16de5f056cc153880a837eaf1c32aebc\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.134844    2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16de5f056cc153880a837eaf1c32aebc-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-215221\" (UID: \"16de5f056cc153880a837eaf1c32aebc\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.134879    2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/377ba06daedd0f310648fd202080c5ad-etcd-certs\") pod \"etcd-kubernetes-upgrade-215221\" (UID: \"377ba06daedd0f310648fd202080c5ad\") " pod="kube-system/etcd-kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: E0425 19:51:51.342028    2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-215221?timeout=10s\": dial tcp 192.168.61.198:8443: connect: connection refused" interval="800ms"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.366451    2347 scope.go:117] "RemoveContainer" containerID="6b78b896e89cfb106a637397535c2eef7c9823d3288b40b62d4c6fb95ac19098"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.367595    2347 scope.go:117] "RemoveContainer" containerID="63869f74ffd342f740fa94b0bce91da828725facfe250bae05c68876de046999"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.368939    2347 scope.go:117] "RemoveContainer" containerID="b91b455ed137fa261bedd91eceda80ea86f59536732770766b6b768b8a244984"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.370746    2347 scope.go:117] "RemoveContainer" containerID="0ad5a93d715454918acc4ac3809aaabff0a639f14a5fe0edb71a887aa6ce8484"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:51.443722    2347 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-215221"
	Apr 25 19:51:51 kubernetes-upgrade-215221 kubelet[2347]: E0425 19:51:51.445085    2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.198:8443: connect: connection refused" node="kubernetes-upgrade-215221"
	Apr 25 19:51:52 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:52.246960    2347 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-215221"
	Apr 25 19:51:55 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:55.454492    2347 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-215221"
	Apr 25 19:51:55 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:55.454919    2347 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-215221"
	Apr 25 19:51:55 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:55.722300    2347 apiserver.go:52] "Watching apiserver"
	Apr 25 19:51:55 kubernetes-upgrade-215221 kubelet[2347]: I0425 19:51:55.733463    2347 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:51:58.598834   63293 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18757-6355/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
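The "bufio.Scanner: token too long" failure in the stderr block above is Go's standard-library scanner giving up on a single line longer than its default 64 KiB token limit (bufio.MaxScanTokenSize), which is why reading lastStart.txt fails once one log line grows past that size. The sketch below is not minikube's actual logs.go code; it is a minimal illustration of the failure mode and of the usual workaround of enlarging the scanner buffer with Scanner.Buffer. The file name and the 10 MiB cap are assumptions made for the example.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path standing in for .minikube/logs/lastStart.txt from the report.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Without this call, any line over bufio.MaxScanTokenSize (64 KiB) makes
	// sc.Err() return bufio.ErrTooLong ("bufio.Scanner: token too long").
	// Buffer raises the per-line limit, here to 10 MiB (an assumed cap).
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		_ = sc.Text() // process each log line
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "failed to read file:", err)
	}
}

Run against a file containing a multi-megabyte single line, the same program without the Buffer call reproduces the error the test harness reports here.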
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-215221 -n kubernetes-upgrade-215221
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-215221 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-215221 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-215221 describe pod storage-provisioner: exit status 1 (79.900166ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-215221 describe pod storage-provisioner: exit status 1
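The NotFound above is consistent with a namespace mismatch rather than a vanished pod: the earlier listing used -A (all namespaces) and minikube's storage-provisioner normally lives in kube-system, while the follow-up describe ran without a namespace flag and therefore looked in "default". A hedged sketch of a namespace-qualified variant of that describe step follows; the function name and flow are illustrative assumptions, not the actual helpers_test.go code.

package main

import (
	"fmt"
	"os/exec"
)

// describePod shells out to kubectl with an explicit context and namespace so the
// post-mortem output is captured even when the command exits non-zero.
func describePod(kubectlContext, namespace, pod string) (string, error) {
	cmd := exec.Command("kubectl",
		"--context", kubectlContext,
		"--namespace", namespace, // omitting this falls back to the kubeconfig namespace ("default" here)
		"describe", "pod", pod)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := describePod("kubernetes-upgrade-215221", "kube-system", "storage-provisioner")
	fmt.Print(out)
	if err != nil {
		fmt.Println("describe failed:", err) // exit status 1 when the pod really is absent
	}
}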
helpers_test.go:175: Cleaning up "kubernetes-upgrade-215221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-215221
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-215221: (1.274873739s)
--- FAIL: TestKubernetesUpgrade (377.64s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (78.02s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-762664 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-762664 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.017266866s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-762664] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18757
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-762664" primary control-plane node in "pause-762664" cluster
	* Updating the running kvm2 "pause-762664" VM ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-762664" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 19:44:15.858391   52810 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:44:15.858528   52810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:44:15.858539   52810 out.go:304] Setting ErrFile to fd 2...
	I0425 19:44:15.858545   52810 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:44:15.858849   52810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:44:15.859590   52810 out.go:298] Setting JSON to false
	I0425 19:44:15.860675   52810 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5202,"bootTime":1714069054,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:44:15.860754   52810 start.go:139] virtualization: kvm guest
	I0425 19:44:15.952881   52810 out.go:177] * [pause-762664] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:44:16.079782   52810 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:44:16.079782   52810 notify.go:220] Checking for updates...
	I0425 19:44:16.210071   52810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:44:16.211716   52810 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:44:16.212980   52810 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:44:16.214191   52810 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:44:16.215459   52810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:44:16.217200   52810 config.go:182] Loaded profile config "pause-762664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:44:16.217904   52810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:44:16.217961   52810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:44:16.234372   52810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
	I0425 19:44:16.234852   52810 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:44:16.235440   52810 main.go:141] libmachine: Using API Version  1
	I0425 19:44:16.235460   52810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:44:16.235853   52810 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:44:16.236031   52810 main.go:141] libmachine: (pause-762664) Calling .DriverName
	I0425 19:44:16.236284   52810 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:44:16.236604   52810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:44:16.236678   52810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:44:16.251824   52810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38157
	I0425 19:44:16.252345   52810 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:44:16.252861   52810 main.go:141] libmachine: Using API Version  1
	I0425 19:44:16.252890   52810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:44:16.253200   52810 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:44:16.253401   52810 main.go:141] libmachine: (pause-762664) Calling .DriverName
	I0425 19:44:16.292166   52810 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:44:16.293951   52810 start.go:297] selected driver: kvm2
	I0425 19:44:16.293975   52810 start.go:901] validating driver "kvm2" against &{Name:pause-762664 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-762664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:44:16.294163   52810 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:44:16.294663   52810 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:44:16.294766   52810 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:44:16.310821   52810 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:44:16.311654   52810 cni.go:84] Creating CNI manager for ""
	I0425 19:44:16.311670   52810 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:44:16.311740   52810 start.go:340] cluster config:
	{Name:pause-762664 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-762664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:44:16.312249   52810 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:44:16.314558   52810 out.go:177] * Starting "pause-762664" primary control-plane node in "pause-762664" cluster
	I0425 19:44:16.315966   52810 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:44:16.316012   52810 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:44:16.316028   52810 cache.go:56] Caching tarball of preloaded images
	I0425 19:44:16.316132   52810 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:44:16.316146   52810 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 19:44:16.316305   52810 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/pause-762664/config.json ...
	I0425 19:44:16.316544   52810 start.go:360] acquireMachinesLock for pause-762664: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:44:35.095905   52810 start.go:364] duration metric: took 18.779329727s to acquireMachinesLock for "pause-762664"
	I0425 19:44:35.095971   52810 start.go:96] Skipping create...Using existing machine configuration
	I0425 19:44:35.095987   52810 fix.go:54] fixHost starting: 
	I0425 19:44:35.096448   52810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:44:35.096494   52810 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:44:35.113795   52810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0425 19:44:35.114238   52810 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:44:35.114748   52810 main.go:141] libmachine: Using API Version  1
	I0425 19:44:35.114774   52810 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:44:35.115096   52810 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:44:35.115312   52810 main.go:141] libmachine: (pause-762664) Calling .DriverName
	I0425 19:44:35.115479   52810 main.go:141] libmachine: (pause-762664) Calling .GetState
	I0425 19:44:35.116963   52810 fix.go:112] recreateIfNeeded on pause-762664: state=Running err=<nil>
	W0425 19:44:35.117002   52810 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 19:44:35.119481   52810 out.go:177] * Updating the running kvm2 "pause-762664" VM ...
	I0425 19:44:35.121070   52810 machine.go:94] provisionDockerMachine start ...
	I0425 19:44:35.121091   52810 main.go:141] libmachine: (pause-762664) Calling .DriverName
	I0425 19:44:35.121286   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHHostname
	I0425 19:44:35.123986   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.124448   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:35.124486   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.124623   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHPort
	I0425 19:44:35.124810   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:35.124949   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:35.125087   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHUsername
	I0425 19:44:35.125268   52810 main.go:141] libmachine: Using SSH client type: native
	I0425 19:44:35.125489   52810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I0425 19:44:35.125510   52810 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 19:44:35.240114   52810 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-762664
	
	I0425 19:44:35.240142   52810 main.go:141] libmachine: (pause-762664) Calling .GetMachineName
	I0425 19:44:35.240398   52810 buildroot.go:166] provisioning hostname "pause-762664"
	I0425 19:44:35.240427   52810 main.go:141] libmachine: (pause-762664) Calling .GetMachineName
	I0425 19:44:35.240620   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHHostname
	I0425 19:44:35.243710   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.244170   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:35.244203   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.244377   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHPort
	I0425 19:44:35.244567   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:35.244772   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:35.244954   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHUsername
	I0425 19:44:35.245109   52810 main.go:141] libmachine: Using SSH client type: native
	I0425 19:44:35.245274   52810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I0425 19:44:35.245292   52810 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-762664 && echo "pause-762664" | sudo tee /etc/hostname
	I0425 19:44:35.381188   52810 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-762664
	
	I0425 19:44:35.381214   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHHostname
	I0425 19:44:35.384033   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.384378   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:35.384405   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.384529   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHPort
	I0425 19:44:35.384745   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:35.384894   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:35.385062   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHUsername
	I0425 19:44:35.385204   52810 main.go:141] libmachine: Using SSH client type: native
	I0425 19:44:35.385441   52810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I0425 19:44:35.385461   52810 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-762664' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-762664/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-762664' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 19:44:35.496412   52810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 19:44:35.496438   52810 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 19:44:35.496492   52810 buildroot.go:174] setting up certificates
	I0425 19:44:35.496505   52810 provision.go:84] configureAuth start
	I0425 19:44:35.496528   52810 main.go:141] libmachine: (pause-762664) Calling .GetMachineName
	I0425 19:44:35.496835   52810 main.go:141] libmachine: (pause-762664) Calling .GetIP
	I0425 19:44:35.500041   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.500455   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:35.500487   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.500683   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHHostname
	I0425 19:44:35.503359   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.503732   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:35.503770   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.503922   52810 provision.go:143] copyHostCerts
	I0425 19:44:35.503984   52810 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 19:44:35.503997   52810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:44:35.504067   52810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 19:44:35.504194   52810 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 19:44:35.504204   52810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:44:35.504237   52810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 19:44:35.504343   52810 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 19:44:35.504351   52810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:44:35.504370   52810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 19:44:35.504431   52810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.pause-762664 san=[127.0.0.1 192.168.61.146 localhost minikube pause-762664]
	I0425 19:44:35.575753   52810 provision.go:177] copyRemoteCerts
	I0425 19:44:35.575815   52810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 19:44:35.575856   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHHostname
	I0425 19:44:35.578608   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.578915   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:35.578939   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.579178   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHPort
	I0425 19:44:35.579339   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:35.579512   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHUsername
	I0425 19:44:35.579643   52810 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/pause-762664/id_rsa Username:docker}
	I0425 19:44:35.674671   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 19:44:35.714041   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0425 19:44:35.742703   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 19:44:35.781059   52810 provision.go:87] duration metric: took 284.5395ms to configureAuth
	I0425 19:44:35.781087   52810 buildroot.go:189] setting minikube options for container-runtime
	I0425 19:44:35.781309   52810 config.go:182] Loaded profile config "pause-762664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:44:35.781422   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHHostname
	I0425 19:44:35.784483   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.785001   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:35.785029   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:35.785268   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHPort
	I0425 19:44:35.785537   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:35.785743   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:35.785920   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHUsername
	I0425 19:44:35.786127   52810 main.go:141] libmachine: Using SSH client type: native
	I0425 19:44:35.786375   52810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I0425 19:44:35.786403   52810 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 19:44:41.636257   52810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 19:44:41.636290   52810 machine.go:97] duration metric: took 6.515207111s to provisionDockerMachine
	I0425 19:44:41.636304   52810 start.go:293] postStartSetup for "pause-762664" (driver="kvm2")
	I0425 19:44:41.636317   52810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 19:44:41.636338   52810 main.go:141] libmachine: (pause-762664) Calling .DriverName
	I0425 19:44:41.636668   52810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 19:44:41.636709   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHHostname
	I0425 19:44:41.639628   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:41.639961   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:41.639992   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:41.640152   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHPort
	I0425 19:44:41.640338   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:41.640482   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHUsername
	I0425 19:44:41.640612   52810 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/pause-762664/id_rsa Username:docker}
	I0425 19:44:41.735590   52810 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 19:44:41.740911   52810 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 19:44:41.740941   52810 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 19:44:41.741014   52810 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 19:44:41.741108   52810 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 19:44:41.741222   52810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 19:44:41.755795   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:44:41.794058   52810 start.go:296] duration metric: took 157.735807ms for postStartSetup
	I0425 19:44:41.794108   52810 fix.go:56] duration metric: took 6.698120417s for fixHost
	I0425 19:44:41.794134   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHHostname
	I0425 19:44:41.797118   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:41.797507   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:41.797545   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:41.797730   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHPort
	I0425 19:44:41.797958   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:41.798160   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:41.798368   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHUsername
	I0425 19:44:41.798550   52810 main.go:141] libmachine: Using SSH client type: native
	I0425 19:44:41.798770   52810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.146 22 <nil> <nil>}
	I0425 19:44:41.798784   52810 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 19:44:41.911963   52810 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714074281.906503033
	
	I0425 19:44:41.911986   52810 fix.go:216] guest clock: 1714074281.906503033
	I0425 19:44:41.911994   52810 fix.go:229] Guest: 2024-04-25 19:44:41.906503033 +0000 UTC Remote: 2024-04-25 19:44:41.794113783 +0000 UTC m=+25.991954498 (delta=112.38925ms)
	I0425 19:44:41.912029   52810 fix.go:200] guest clock delta is within tolerance: 112.38925ms
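The fix.go lines above read the guest clock with "date +%s.%N" over SSH and compare it to the host-side timestamp, accepting the result because the roughly 112ms skew is under the tolerance. A minimal sketch of that comparison, reusing the values from this log; the one-second tolerance is an assumption for illustration, not the threshold minikube actually uses:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` run on the guest and
// returns the signed difference between the guest clock and the host clock.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	sec, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest timestamp %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log above: guest 1714074281.906503033,
	// host 2024-04-25 19:44:41.794113783 UTC (delta is ~112ms; float64
	// parsing loses the last few nanoseconds of precision).
	host := time.Date(2024, 4, 25, 19, 44, 41, 794113783, time.UTC)
	delta, err := guestClockDelta("1714074281.906503033", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold, for illustration only
	if math.Abs(float64(delta)) < float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would adjust clock\n", delta)
	}
}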
	I0425 19:44:41.912033   52810 start.go:83] releasing machines lock for "pause-762664", held for 6.816087721s
	I0425 19:44:41.912061   52810 main.go:141] libmachine: (pause-762664) Calling .DriverName
	I0425 19:44:41.912381   52810 main.go:141] libmachine: (pause-762664) Calling .GetIP
	I0425 19:44:41.915941   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:41.916326   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:41.916349   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:41.916622   52810 main.go:141] libmachine: (pause-762664) Calling .DriverName
	I0425 19:44:41.917316   52810 main.go:141] libmachine: (pause-762664) Calling .DriverName
	I0425 19:44:41.917528   52810 main.go:141] libmachine: (pause-762664) Calling .DriverName
	I0425 19:44:41.917620   52810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 19:44:41.917658   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHHostname
	I0425 19:44:41.917867   52810 ssh_runner.go:195] Run: cat /version.json
	I0425 19:44:41.917894   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHHostname
	I0425 19:44:41.920723   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:41.921183   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:41.921350   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:41.921376   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:41.921402   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:41.921416   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:41.921492   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHPort
	I0425 19:44:41.921686   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHPort
	I0425 19:44:41.921725   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:41.921848   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHUsername
	I0425 19:44:41.921898   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHKeyPath
	I0425 19:44:41.921964   52810 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/pause-762664/id_rsa Username:docker}
	I0425 19:44:41.922302   52810 main.go:141] libmachine: (pause-762664) Calling .GetSSHUsername
	I0425 19:44:41.922449   52810 sshutil.go:53] new ssh client: &{IP:192.168.61.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/pause-762664/id_rsa Username:docker}
	I0425 19:44:42.031804   52810 ssh_runner.go:195] Run: systemctl --version
	I0425 19:44:42.038758   52810 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 19:44:42.205929   52810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 19:44:42.216719   52810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 19:44:42.216793   52810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 19:44:42.228048   52810 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0425 19:44:42.228074   52810 start.go:494] detecting cgroup driver to use...
	I0425 19:44:42.228144   52810 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 19:44:42.253447   52810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 19:44:42.273857   52810 docker.go:217] disabling cri-docker service (if available) ...
	I0425 19:44:42.273926   52810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 19:44:42.290155   52810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 19:44:42.308135   52810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 19:44:42.493999   52810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 19:44:42.670772   52810 docker.go:233] disabling docker service ...
	I0425 19:44:42.670861   52810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 19:44:42.690825   52810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 19:44:42.707973   52810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 19:44:42.880032   52810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 19:44:43.060489   52810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 19:44:43.077659   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 19:44:43.108623   52810 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 19:44:43.108691   52810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:44:43.127688   52810 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 19:44:43.127751   52810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:44:43.145075   52810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:44:43.163200   52810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:44:43.178983   52810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 19:44:43.197825   52810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:44:43.212934   52810 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:44:43.232943   52810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:44:43.247682   52810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 19:44:43.260193   52810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 19:44:43.293362   52810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:44:43.536283   52810 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 19:44:49.392195   52810 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.855878088s)
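The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, reset conmon_cgroup, and open net.ipv4.ip_unprivileged_port_start, then restart crio. A rough local equivalent of the first two edits in Go, using regexp instead of sed; the sample file contents below are made up for the sketch, not taken from the guest:

package main

import (
	"fmt"
	"regexp"
)

// setTOMLKey replaces an existing `key = ...` line in a crio drop-in with the
// given value, mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` calls above.
func setTOMLKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	conf := []byte(`[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`)
	conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(string(conf))
	// A real provisioner would write the result back to
	// /etc/crio/crio.conf.d/02-crio.conf and run `systemctl restart crio`,
	// as the ssh_runner steps above do.
}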
	I0425 19:44:49.392227   52810 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 19:44:49.392277   52810 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 19:44:49.399416   52810 start.go:562] Will wait 60s for crictl version
	I0425 19:44:49.399479   52810 ssh_runner.go:195] Run: which crictl
	I0425 19:44:49.404764   52810 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 19:44:49.455038   52810 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 19:44:49.455117   52810 ssh_runner.go:195] Run: crio --version
	I0425 19:44:49.490423   52810 ssh_runner.go:195] Run: crio --version
	I0425 19:44:49.533433   52810 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 19:44:49.535003   52810 main.go:141] libmachine: (pause-762664) Calling .GetIP
	I0425 19:44:49.537633   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:49.538026   52810 main.go:141] libmachine: (pause-762664) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:70:ea", ip: ""} in network mk-pause-762664: {Iface:virbr1 ExpiryTime:2024-04-25 20:43:32 +0000 UTC Type:0 Mac:52:54:00:a8:70:ea Iaid: IPaddr:192.168.61.146 Prefix:24 Hostname:pause-762664 Clientid:01:52:54:00:a8:70:ea}
	I0425 19:44:49.538056   52810 main.go:141] libmachine: (pause-762664) DBG | domain pause-762664 has defined IP address 192.168.61.146 and MAC address 52:54:00:a8:70:ea in network mk-pause-762664
	I0425 19:44:49.538275   52810 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 19:44:49.543625   52810 kubeadm.go:877] updating cluster {Name:pause-762664 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:pause-762664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 19:44:49.543802   52810 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:44:49.543848   52810 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:44:49.599407   52810 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:44:49.599433   52810 crio.go:433] Images already preloaded, skipping extraction
	I0425 19:44:49.599491   52810 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:44:49.641507   52810 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:44:49.641534   52810 cache_images.go:84] Images are preloaded, skipping loading
	I0425 19:44:49.641544   52810 kubeadm.go:928] updating node { 192.168.61.146 8443 v1.30.0 crio true true} ...
	I0425 19:44:49.641664   52810 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-762664 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.146
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-762664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
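The rendered 10-kubeadm.conf drop-in above uses the standard systemd override pattern: the empty ExecStart= line clears the ExecStart inherited from the base kubelet.service before the full command line is set. A small sketch that renders the same drop-in from the node values shown in the log; this is an illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn mirrors the unit shown above: clearing ExecStart first, then
// overriding it with the full kubelet command line for this node.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("10-kubeadm.conf").Parse(kubeletDropIn))
	// Values taken from the rendered unit in the log above.
	err := tmpl.Execute(os.Stdout, struct {
		KubeletPath, NodeName, NodeIP string
	}{
		KubeletPath: "/var/lib/minikube/binaries/v1.30.0/kubelet",
		NodeName:    "pause-762664",
		NodeIP:      "192.168.61.146",
	})
	if err != nil {
		panic(err)
	}
}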
	I0425 19:44:49.641748   52810 ssh_runner.go:195] Run: crio config
	I0425 19:44:49.706767   52810 cni.go:84] Creating CNI manager for ""
	I0425 19:44:49.706789   52810 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:44:49.706800   52810 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 19:44:49.706819   52810 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.146 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-762664 NodeName:pause-762664 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.146"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.146 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 19:44:49.706939   52810 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.146
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-762664"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.146
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.146"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 19:44:49.706995   52810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 19:44:49.719304   52810 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 19:44:49.719381   52810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 19:44:49.730575   52810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0425 19:44:49.753420   52810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 19:44:49.775475   52810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0425 19:44:49.796240   52810 ssh_runner.go:195] Run: grep 192.168.61.146	control-plane.minikube.internal$ /etc/hosts
	I0425 19:44:49.801720   52810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:44:49.935683   52810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 19:44:49.952027   52810 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/pause-762664 for IP: 192.168.61.146
	I0425 19:44:49.952051   52810 certs.go:194] generating shared ca certs ...
	I0425 19:44:49.952070   52810 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:44:49.952236   52810 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 19:44:49.952275   52810 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 19:44:49.952285   52810 certs.go:256] generating profile certs ...
	I0425 19:44:49.952371   52810 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/pause-762664/client.key
	I0425 19:44:49.952462   52810 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/pause-762664/apiserver.key.869d5144
	I0425 19:44:49.952518   52810 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/pause-762664/proxy-client.key
	I0425 19:44:49.952628   52810 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 19:44:49.952660   52810 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 19:44:49.952666   52810 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 19:44:49.952685   52810 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 19:44:49.952703   52810 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 19:44:49.952723   52810 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 19:44:49.952758   52810 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:44:49.953307   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 19:44:49.989430   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 19:44:50.018852   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 19:44:50.047673   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 19:44:50.077193   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/pause-762664/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0425 19:44:50.107026   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/pause-762664/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 19:44:50.135887   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/pause-762664/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 19:44:50.168997   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/pause-762664/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0425 19:44:50.198260   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 19:44:50.231632   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 19:44:50.264819   52810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 19:44:50.295124   52810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 19:44:50.315114   52810 ssh_runner.go:195] Run: openssl version
	I0425 19:44:50.322841   52810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 19:44:50.337392   52810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:44:50.343670   52810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:44:50.343741   52810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:44:50.353011   52810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 19:44:50.366529   52810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 19:44:50.381645   52810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 19:44:50.387764   52810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 19:44:50.387830   52810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 19:44:50.395395   52810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 19:44:50.434757   52810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 19:44:50.470119   52810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 19:44:50.494277   52810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:44:50.494347   52810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 19:44:50.516120   52810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 19:44:50.560675   52810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 19:44:50.571054   52810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 19:44:50.588421   52810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 19:44:50.599608   52810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 19:44:50.625234   52810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 19:44:50.649040   52810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 19:44:50.693676   52810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
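Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 24 hours, presumably so the restart path knows whether any control-plane certs need regenerating before reuse. The same check expressed in Go with crypto/x509; the file path in main is a hypothetical stand-in for the /var/lib/minikube/certs files listed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, the same condition `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical local path standing in for e.g.
	// /var/lib/minikube/certs/apiserver-kubelet-client.crt on the guest.
	expiring, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if expiring {
		fmt.Println("certificate expires within 24h; would regenerate")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}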
	I0425 19:44:50.733571   52810 kubeadm.go:391] StartCluster: {Name:pause-762664 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:pause-762664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:44:50.733747   52810 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 19:44:50.733842   52810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 19:44:50.897868   52810 cri.go:89] found id: "537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb"
	I0425 19:44:50.897895   52810 cri.go:89] found id: "ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0"
	I0425 19:44:50.897899   52810 cri.go:89] found id: "bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24"
	I0425 19:44:50.897902   52810 cri.go:89] found id: "e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4"
	I0425 19:44:50.897905   52810 cri.go:89] found id: "18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b"
	I0425 19:44:50.897908   52810 cri.go:89] found id: "531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab"
	I0425 19:44:50.897917   52810 cri.go:89] found id: "c4066d8d6033ed321eb01d5a08d8b9a6c32eee002a442d7a0b7fad50a5aae689"
	I0425 19:44:50.897920   52810 cri.go:89] found id: "28e68767d7b7898448d0882481acf39693721439e1aa0dcfd4f5447af85516ad"
	I0425 19:44:50.897922   52810 cri.go:89] found id: ""
	I0425 19:44:50.897964   52810 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-762664 -n pause-762664
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-762664 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-762664 logs -n 25: (2.463106224s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo cat                            | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo cat                            | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo cat                            | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo cat                            | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo find                           | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo crio                           | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-120641                                     | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC | 25 Apr 24 19:42 UTC |
	| start   | -p NoKubernetes-335371                               | NoKubernetes-335371       | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | --no-kubernetes                                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-335371                               | NoKubernetes-335371       | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC | 25 Apr 24 19:44 UTC |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-783271                          | force-systemd-env-783271  | jenkins | v1.33.0 | 25 Apr 24 19:43 UTC | 25 Apr 24 19:43 UTC |
	| start   | -p cert-expiration-571974                            | cert-expiration-571974    | jenkins | v1.33.0 | 25 Apr 24 19:43 UTC | 25 Apr 24 19:44 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-762664                                      | pause-762664              | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC | 25 Apr 24 19:45 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-335371                               | NoKubernetes-335371       | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC | 25 Apr 24 19:44 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p offline-crio-744375                               | offline-crio-744375       | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC | 25 Apr 24 19:44 UTC |
	| start   | -p force-systemd-flag-543895                         | force-systemd-flag-543895 | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-335371                               | NoKubernetes-335371       | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC | 25 Apr 24 19:44 UTC |
	| start   | -p NoKubernetes-335371                               | NoKubernetes-335371       | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:44:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:44:58.188153   53486 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:44:58.188254   53486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:44:58.188258   53486 out.go:304] Setting ErrFile to fd 2...
	I0425 19:44:58.188261   53486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:44:58.188457   53486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:44:58.189032   53486 out.go:298] Setting JSON to false
	I0425 19:44:58.189962   53486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5244,"bootTime":1714069054,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:44:58.190027   53486 start.go:139] virtualization: kvm guest
	I0425 19:44:58.193360   53486 out.go:177] * [NoKubernetes-335371] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:44:58.195000   53486 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:44:58.194962   53486 notify.go:220] Checking for updates...
	I0425 19:44:58.196389   53486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:44:58.197733   53486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:44:58.198964   53486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:44:58.200214   53486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:44:58.201617   53486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:44:58.203289   53486 config.go:182] Loaded profile config "cert-expiration-571974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:44:58.203369   53486 config.go:182] Loaded profile config "force-systemd-flag-543895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:44:58.203490   53486 config.go:182] Loaded profile config "pause-762664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:44:58.203504   53486 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0425 19:44:58.203569   53486 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:44:58.239363   53486 out.go:177] * Using the kvm2 driver based on user configuration
	I0425 19:44:58.240615   53486 start.go:297] selected driver: kvm2
	I0425 19:44:58.240621   53486 start.go:901] validating driver "kvm2" against <nil>
	I0425 19:44:58.240630   53486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:44:58.240906   53486 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0425 19:44:58.240970   53486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:44:58.241030   53486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:44:58.256764   53486 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:44:58.256827   53486 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 19:44:58.257485   53486 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0425 19:44:58.257660   53486 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0425 19:44:58.257729   53486 cni.go:84] Creating CNI manager for ""
	I0425 19:44:58.257740   53486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:44:58.257748   53486 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 19:44:58.257775   53486 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0425 19:44:58.257827   53486 start.go:340] cluster config:
	{Name:NoKubernetes-335371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-335371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:44:58.257954   53486 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:44:58.259939   53486 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-335371
	I0425 19:44:54.197838   53123 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0425 19:44:54.198045   53123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:44:54.198091   53123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:44:54.219374   53123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0425 19:44:54.219888   53123 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:44:54.220481   53123 main.go:141] libmachine: Using API Version  1
	I0425 19:44:54.220528   53123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:44:54.220853   53123 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:44:54.221012   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetMachineName
	I0425 19:44:54.221154   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:44:54.221262   53123 start.go:159] libmachine.API.Create for "force-systemd-flag-543895" (driver="kvm2")
	I0425 19:44:54.221286   53123 client.go:168] LocalClient.Create starting
	I0425 19:44:54.221317   53123 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 19:44:54.221357   53123 main.go:141] libmachine: Decoding PEM data...
	I0425 19:44:54.221379   53123 main.go:141] libmachine: Parsing certificate...
	I0425 19:44:54.221438   53123 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 19:44:54.221460   53123 main.go:141] libmachine: Decoding PEM data...
	I0425 19:44:54.221476   53123 main.go:141] libmachine: Parsing certificate...
	I0425 19:44:54.221498   53123 main.go:141] libmachine: Running pre-create checks...
	I0425 19:44:54.221519   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .PreCreateCheck
	I0425 19:44:54.221906   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetConfigRaw
	I0425 19:44:54.222286   53123 main.go:141] libmachine: Creating machine...
	I0425 19:44:54.222302   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .Create
	I0425 19:44:54.222435   53123 main.go:141] libmachine: (force-systemd-flag-543895) Creating KVM machine...
	I0425 19:44:54.223582   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found existing default KVM network
	I0425 19:44:54.224734   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:54.224604   53269 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:82:ea} reservation:<nil>}
	I0425 19:44:54.225704   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:54.225631   53269 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00035a0b0}
	I0425 19:44:54.225729   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | created network xml: 
	I0425 19:44:54.225738   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | <network>
	I0425 19:44:54.225750   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   <name>mk-force-systemd-flag-543895</name>
	I0425 19:44:54.225759   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   <dns enable='no'/>
	I0425 19:44:54.225771   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   
	I0425 19:44:54.225790   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0425 19:44:54.225812   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |     <dhcp>
	I0425 19:44:54.225822   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0425 19:44:54.225835   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |     </dhcp>
	I0425 19:44:54.225845   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   </ip>
	I0425 19:44:54.225853   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   
	I0425 19:44:54.225861   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | </network>
	I0425 19:44:54.225873   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | 
	I0425 19:44:54.231276   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | trying to create private KVM network mk-force-systemd-flag-543895 192.168.50.0/24...
	I0425 19:44:54.314460   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | private KVM network mk-force-systemd-flag-543895 192.168.50.0/24 created
	I0425 19:44:54.314489   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895 ...
	I0425 19:44:54.314503   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:54.314441   53269 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:44:54.314537   53123 main.go:141] libmachine: (force-systemd-flag-543895) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 19:44:54.314553   53123 main.go:141] libmachine: (force-systemd-flag-543895) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 19:44:54.553491   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:54.553327   53269 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa...
	I0425 19:44:55.009655   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:55.009493   53269 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/force-systemd-flag-543895.rawdisk...
	I0425 19:44:55.009685   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Writing magic tar header
	I0425 19:44:55.009713   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Writing SSH key tar header
	I0425 19:44:55.009727   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:55.009610   53269 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895 ...
	I0425 19:44:55.009743   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895
	I0425 19:44:55.009801   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895 (perms=drwx------)
	I0425 19:44:55.009831   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 19:44:55.009847   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 19:44:55.009866   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:44:55.009884   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 19:44:55.009894   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 19:44:55.009905   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 19:44:55.009914   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins
	I0425 19:44:55.009924   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home
	I0425 19:44:55.009931   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Skipping /home - not owner
	I0425 19:44:55.009945   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 19:44:55.009963   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 19:44:55.009975   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 19:44:55.009983   53123 main.go:141] libmachine: (force-systemd-flag-543895) Creating domain...
	I0425 19:44:55.011339   53123 main.go:141] libmachine: (force-systemd-flag-543895) define libvirt domain using xml: 
	I0425 19:44:55.011365   53123 main.go:141] libmachine: (force-systemd-flag-543895) <domain type='kvm'>
	I0425 19:44:55.011377   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <name>force-systemd-flag-543895</name>
	I0425 19:44:55.011389   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <memory unit='MiB'>2048</memory>
	I0425 19:44:55.011400   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <vcpu>2</vcpu>
	I0425 19:44:55.011406   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <features>
	I0425 19:44:55.011420   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <acpi/>
	I0425 19:44:55.011426   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <apic/>
	I0425 19:44:55.011434   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <pae/>
	I0425 19:44:55.011447   53123 main.go:141] libmachine: (force-systemd-flag-543895)     
	I0425 19:44:55.011459   53123 main.go:141] libmachine: (force-systemd-flag-543895)   </features>
	I0425 19:44:55.011470   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <cpu mode='host-passthrough'>
	I0425 19:44:55.011477   53123 main.go:141] libmachine: (force-systemd-flag-543895)   
	I0425 19:44:55.011494   53123 main.go:141] libmachine: (force-systemd-flag-543895)   </cpu>
	I0425 19:44:55.011506   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <os>
	I0425 19:44:55.011516   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <type>hvm</type>
	I0425 19:44:55.011541   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <boot dev='cdrom'/>
	I0425 19:44:55.011552   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <boot dev='hd'/>
	I0425 19:44:55.011564   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <bootmenu enable='no'/>
	I0425 19:44:55.011572   53123 main.go:141] libmachine: (force-systemd-flag-543895)   </os>
	I0425 19:44:55.011584   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <devices>
	I0425 19:44:55.011595   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <disk type='file' device='cdrom'>
	I0425 19:44:55.011609   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/boot2docker.iso'/>
	I0425 19:44:55.011621   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <target dev='hdc' bus='scsi'/>
	I0425 19:44:55.011630   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <readonly/>
	I0425 19:44:55.011642   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </disk>
	I0425 19:44:55.011651   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <disk type='file' device='disk'>
	I0425 19:44:55.011660   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 19:44:55.011685   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/force-systemd-flag-543895.rawdisk'/>
	I0425 19:44:55.011699   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <target dev='hda' bus='virtio'/>
	I0425 19:44:55.011710   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </disk>
	I0425 19:44:55.011718   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <interface type='network'>
	I0425 19:44:55.011730   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <source network='mk-force-systemd-flag-543895'/>
	I0425 19:44:55.011741   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <model type='virtio'/>
	I0425 19:44:55.011756   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </interface>
	I0425 19:44:55.011774   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <interface type='network'>
	I0425 19:44:55.011783   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <source network='default'/>
	I0425 19:44:55.011796   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <model type='virtio'/>
	I0425 19:44:55.011804   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </interface>
	I0425 19:44:55.011817   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <serial type='pty'>
	I0425 19:44:55.011826   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <target port='0'/>
	I0425 19:44:55.011836   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </serial>
	I0425 19:44:55.011844   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <console type='pty'>
	I0425 19:44:55.011854   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <target type='serial' port='0'/>
	I0425 19:44:55.011864   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </console>
	I0425 19:44:55.011873   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <rng model='virtio'>
	I0425 19:44:55.011885   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <backend model='random'>/dev/random</backend>
	I0425 19:44:55.011894   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </rng>
	I0425 19:44:55.011903   53123 main.go:141] libmachine: (force-systemd-flag-543895)     
	I0425 19:44:55.011917   53123 main.go:141] libmachine: (force-systemd-flag-543895)     
	I0425 19:44:55.011928   53123 main.go:141] libmachine: (force-systemd-flag-543895)   </devices>
	I0425 19:44:55.011937   53123 main.go:141] libmachine: (force-systemd-flag-543895) </domain>
	I0425 19:44:55.011950   53123 main.go:141] libmachine: (force-systemd-flag-543895) 
	I0425 19:44:55.016713   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:a7:1e:8f in network default
	I0425 19:44:55.017482   53123 main.go:141] libmachine: (force-systemd-flag-543895) Ensuring networks are active...
	I0425 19:44:55.017507   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:55.018334   53123 main.go:141] libmachine: (force-systemd-flag-543895) Ensuring network default is active
	I0425 19:44:55.018792   53123 main.go:141] libmachine: (force-systemd-flag-543895) Ensuring network mk-force-systemd-flag-543895 is active
	I0425 19:44:55.019503   53123 main.go:141] libmachine: (force-systemd-flag-543895) Getting domain xml...
	I0425 19:44:55.020438   53123 main.go:141] libmachine: (force-systemd-flag-543895) Creating domain...
	I0425 19:44:56.331770   53123 main.go:141] libmachine: (force-systemd-flag-543895) Waiting to get IP...
	I0425 19:44:56.332562   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:56.333024   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:56.333069   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:56.333012   53269 retry.go:31] will retry after 255.936503ms: waiting for machine to come up
	I0425 19:44:56.590529   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:56.591052   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:56.591081   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:56.590995   53269 retry.go:31] will retry after 336.470709ms: waiting for machine to come up
	I0425 19:44:56.929686   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:56.930251   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:56.930278   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:56.930193   53269 retry.go:31] will retry after 450.038265ms: waiting for machine to come up
	I0425 19:44:57.381527   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:57.404567   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:57.404603   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:57.404484   53269 retry.go:31] will retry after 605.49286ms: waiting for machine to come up
	I0425 19:44:58.011206   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:58.011713   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:58.011742   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:58.011665   53269 retry.go:31] will retry after 497.146273ms: waiting for machine to come up
	I0425 19:44:58.261590   53486 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0425 19:44:58.377531   53486 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0425 19:44:58.377666   53486 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/NoKubernetes-335371/config.json ...
	I0425 19:44:58.377703   53486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/NoKubernetes-335371/config.json: {Name:mk6254d0d533222ac67230aff9d54ab2c7ed994f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:44:58.377862   53486 start.go:360] acquireMachinesLock for NoKubernetes-335371: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:44:58.510228   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:58.510704   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:58.510736   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:58.510667   53269 retry.go:31] will retry after 642.287101ms: waiting for machine to come up
	I0425 19:44:59.154439   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:59.155150   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:59.155177   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:59.155076   53269 retry.go:31] will retry after 1.15090394s: waiting for machine to come up
	I0425 19:45:00.307857   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:00.308224   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:00.308253   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:00.308182   53269 retry.go:31] will retry after 1.418985934s: waiting for machine to come up
	I0425 19:45:01.728805   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:01.729255   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:01.729284   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:01.729214   53269 retry.go:31] will retry after 1.793205224s: waiting for machine to come up
	I0425 19:45:01.703523   52810 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e 537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0 bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24 e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4 18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b 531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab c4066d8d6033ed321eb01d5a08d8b9a6c32eee002a442d7a0b7fad50a5aae689 28e68767d7b7898448d0882481acf39693721439e1aa0dcfd4f5447af85516ad: (10.521847877s)
	W0425 19:45:01.703597   52810 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e 537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0 bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24 e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4 18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b 531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab c4066d8d6033ed321eb01d5a08d8b9a6c32eee002a442d7a0b7fad50a5aae689 28e68767d7b7898448d0882481acf39693721439e1aa0dcfd4f5447af85516ad: Process exited with status 1
	stdout:
	15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c
	ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e
	537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb
	ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0
	bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24
	e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4
	
	stderr:
	E0425 19:45:01.695742    2907 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b\": container with ID starting with 18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b not found: ID does not exist" containerID="18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b"
	time="2024-04-25T19:45:01Z" level=fatal msg="stopping the container \"18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b\": rpc error: code = NotFound desc = could not find container \"18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b\": container with ID starting with 18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b not found: ID does not exist"
	I0425 19:45:01.703678   52810 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 19:45:01.753450   52810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 19:45:01.767134   52810 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Apr 25 19:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Apr 25 19:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Apr 25 19:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Apr 25 19:43 /etc/kubernetes/scheduler.conf
	
	I0425 19:45:01.767201   52810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 19:45:01.778515   52810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 19:45:01.789099   52810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 19:45:01.799679   52810 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0425 19:45:01.799733   52810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 19:45:01.810618   52810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 19:45:01.821081   52810 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0425 19:45:01.821141   52810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 19:45:01.832444   52810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 19:45:01.843520   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:01.912718   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:03.234848   52810 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.322094337s)
	I0425 19:45:03.234890   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:03.492106   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:03.582586   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:03.704760   52810 api_server.go:52] waiting for apiserver process to appear ...
	I0425 19:45:03.704821   52810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 19:45:04.205060   52810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 19:45:04.705093   52810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 19:45:04.723793   52810 api_server.go:72] duration metric: took 1.019031032s to wait for apiserver process to appear ...
	I0425 19:45:04.723823   52810 api_server.go:88] waiting for apiserver healthz status ...
	I0425 19:45:04.723845   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:03.524661   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:03.525142   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:03.525170   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:03.525088   53269 retry.go:31] will retry after 1.80199974s: waiting for machine to come up
	I0425 19:45:05.328636   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:05.329127   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:05.329199   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:05.329119   53269 retry.go:31] will retry after 2.421701866s: waiting for machine to come up
	I0425 19:45:07.753032   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:07.753519   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:07.753552   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:07.753459   53269 retry.go:31] will retry after 3.092699852s: waiting for machine to come up
	I0425 19:45:07.292517   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 19:45:07.292547   52810 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 19:45:07.292580   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:07.336902   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 19:45:07.336947   52810 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 19:45:07.724465   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:07.731221   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 19:45:07.731249   52810 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 19:45:08.224602   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:08.229337   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 19:45:08.229364   52810 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 19:45:08.723911   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:08.729810   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I0425 19:45:08.738361   52810 api_server.go:141] control plane version: v1.30.0
	I0425 19:45:08.738391   52810 api_server.go:131] duration metric: took 4.014560379s to wait for apiserver health ...
	I0425 19:45:08.738402   52810 cni.go:84] Creating CNI manager for ""
	I0425 19:45:08.738409   52810 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:45:08.740103   52810 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 19:45:08.741447   52810 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 19:45:08.762024   52810 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 19:45:08.795487   52810 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 19:45:08.808763   52810 system_pods.go:59] 7 kube-system pods found
	I0425 19:45:08.808794   52810 system_pods.go:61] "coredns-7db6d8ff4d-g4zcp" [d9d92885-9821-488c-bb93-a4a35d60fb1a] Running
	I0425 19:45:08.808809   52810 system_pods.go:61] "coredns-7db6d8ff4d-x667t" [e764791e-c170-49f4-b844-668b59f31072] Running
	I0425 19:45:08.808844   52810 system_pods.go:61] "etcd-pause-762664" [7f83a16c-07d2-4c41-b029-9e022a962f8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 19:45:08.808858   52810 system_pods.go:61] "kube-apiserver-pause-762664" [8b442b86-8626-4b72-8583-36c3e2617faa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 19:45:08.808878   52810 system_pods.go:61] "kube-controller-manager-pause-762664" [0d731a16-9799-4916-8ce7-10b8b38657a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 19:45:08.808889   52810 system_pods.go:61] "kube-proxy-j2lhr" [3bb81443-7890-4887-9031-5a05eba9d67d] Running
	I0425 19:45:08.808908   52810 system_pods.go:61] "kube-scheduler-pause-762664" [98bb7678-6066-4fc0-ab0c-c90b36ac5339] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 19:45:08.808920   52810 system_pods.go:74] duration metric: took 13.412055ms to wait for pod list to return data ...
	I0425 19:45:08.808933   52810 node_conditions.go:102] verifying NodePressure condition ...
	I0425 19:45:08.814941   52810 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 19:45:08.814970   52810 node_conditions.go:123] node cpu capacity is 2
	I0425 19:45:08.814983   52810 node_conditions.go:105] duration metric: took 6.041316ms to run NodePressure ...
	I0425 19:45:08.815010   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:09.125891   52810 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 19:45:09.131443   52810 kubeadm.go:733] kubelet initialised
	I0425 19:45:09.131467   52810 kubeadm.go:734] duration metric: took 5.545845ms waiting for restarted kubelet to initialise ...
	I0425 19:45:09.131485   52810 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 19:45:09.140572   52810 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:09.152710   52810 pod_ready.go:92] pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:09.152732   52810 pod_ready.go:81] duration metric: took 12.135152ms for pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:09.152740   52810 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:09.158698   52810 pod_ready.go:92] pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:09.158724   52810 pod_ready.go:81] duration metric: took 5.976825ms for pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:09.158736   52810 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:10.848012   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:10.848495   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:10.848521   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:10.848412   53269 retry.go:31] will retry after 3.812029793s: waiting for machine to come up
	I0425 19:45:11.166530   52810 pod_ready.go:102] pod "etcd-pause-762664" in "kube-system" namespace has status "Ready":"False"
	I0425 19:45:13.166654   52810 pod_ready.go:102] pod "etcd-pause-762664" in "kube-system" namespace has status "Ready":"False"
	I0425 19:45:15.667160   52810 pod_ready.go:102] pod "etcd-pause-762664" in "kube-system" namespace has status "Ready":"False"
	I0425 19:45:14.662581   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:14.662992   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:14.663023   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:14.662957   53269 retry.go:31] will retry after 4.124167035s: waiting for machine to come up
	I0425 19:45:20.427389   53486 start.go:364] duration metric: took 22.049509061s to acquireMachinesLock for "NoKubernetes-335371"
	I0425 19:45:20.427426   53486 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-335371 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-335371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 19:45:20.427558   53486 start.go:125] createHost starting for "" (driver="kvm2")
	I0425 19:45:17.665162   52810 pod_ready.go:92] pod "etcd-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:17.665195   52810 pod_ready.go:81] duration metric: took 8.5064499s for pod "etcd-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:17.665211   52810 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:19.671910   52810 pod_ready.go:92] pod "kube-apiserver-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:19.671938   52810 pod_ready.go:81] duration metric: took 2.006716953s for pod "kube-apiserver-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:19.671950   52810 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:18.791284   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:18.791687   53123 main.go:141] libmachine: (force-systemd-flag-543895) Found IP for machine: 192.168.50.9
	I0425 19:45:18.791709   53123 main.go:141] libmachine: (force-systemd-flag-543895) Reserving static IP address...
	I0425 19:45:18.791723   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has current primary IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:18.792111   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find host DHCP lease matching {name: "force-systemd-flag-543895", mac: "52:54:00:b7:de:a4", ip: "192.168.50.9"} in network mk-force-systemd-flag-543895
	I0425 19:45:18.867401   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Getting to WaitForSSH function...
	I0425 19:45:18.867435   53123 main.go:141] libmachine: (force-systemd-flag-543895) Reserved static IP address: 192.168.50.9
	I0425 19:45:18.867455   53123 main.go:141] libmachine: (force-systemd-flag-543895) Waiting for SSH to be available...
	I0425 19:45:18.870009   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:18.870503   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:18.870534   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:18.870616   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Using SSH client type: external
	I0425 19:45:18.870647   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa (-rw-------)
	I0425 19:45:18.870675   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 19:45:18.870694   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | About to run SSH command:
	I0425 19:45:18.870711   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | exit 0
	I0425 19:45:19.002512   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | SSH cmd err, output: <nil>: 
	I0425 19:45:19.002783   53123 main.go:141] libmachine: (force-systemd-flag-543895) KVM machine creation complete!
	I0425 19:45:19.003056   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetConfigRaw
	I0425 19:45:19.003524   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:19.003741   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:19.003880   53123 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 19:45:19.003891   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetState
	I0425 19:45:19.005046   53123 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 19:45:19.005062   53123 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 19:45:19.005069   53123 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 19:45:19.005078   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.007782   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.008194   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.008225   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.008375   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.008535   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.008702   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.008840   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.008979   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:19.009191   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:19.009203   53123 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 19:45:19.121831   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 19:45:19.121855   53123 main.go:141] libmachine: Detecting the provisioner...
	I0425 19:45:19.121875   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.124641   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.125047   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.125076   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.125336   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.125571   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.125726   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.125848   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.126011   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:19.126178   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:19.126189   53123 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 19:45:19.239579   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 19:45:19.239642   53123 main.go:141] libmachine: found compatible host: buildroot
	I0425 19:45:19.239650   53123 main.go:141] libmachine: Provisioning with buildroot...
	I0425 19:45:19.239658   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetMachineName
	I0425 19:45:19.239879   53123 buildroot.go:166] provisioning hostname "force-systemd-flag-543895"
	I0425 19:45:19.239903   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetMachineName
	I0425 19:45:19.240108   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.242714   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.243080   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.243141   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.243269   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.243478   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.243629   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.243773   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.243951   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:19.244179   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:19.244196   53123 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-543895 && echo "force-systemd-flag-543895" | sudo tee /etc/hostname
	I0425 19:45:19.375446   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-543895
	
	I0425 19:45:19.375480   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.377960   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.378341   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.378375   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.378554   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.378740   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.378918   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.379061   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.379238   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:19.379438   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:19.379469   53123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-543895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-543895/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-543895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 19:45:19.503988   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
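
The SSH command above makes the 127.0.1.1 hostname entry idempotent: it rewrites or appends the line only when the hostname is not already resolvable from /etc/hosts. A minimal Go sketch of the same edit, applied to file contents directly instead of over SSH (illustrative only, not minikube's implementation; the hostname is taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry returns the hosts-file contents with a
    // "127.0.1.1 <hostname>" line present, mirroring the shell logic above:
    // do nothing if the hostname already resolves, replace an existing
    // 127.0.1.1 line if there is one, otherwise append a new entry.
    func ensureHostsEntry(contents, hostname string) string {
    	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(contents) {
    		return contents // hostname already resolves locally
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	entry := "127.0.1.1 " + hostname
    	if loopback.MatchString(contents) {
    		return loopback.ReplaceAllString(contents, entry)
    	}
    	return strings.TrimRight(contents, "\n") + "\n" + entry + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(ensureHostsEntry(string(data), "force-systemd-flag-543895"))
    }
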
	I0425 19:45:19.504021   53123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 19:45:19.504079   53123 buildroot.go:174] setting up certificates
	I0425 19:45:19.504095   53123 provision.go:84] configureAuth start
	I0425 19:45:19.504120   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetMachineName
	I0425 19:45:19.504401   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetIP
	I0425 19:45:19.507198   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.507555   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.507576   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.507735   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.509992   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.510344   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.510382   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.510560   53123 provision.go:143] copyHostCerts
	I0425 19:45:19.510601   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:45:19.510633   53123 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 19:45:19.510642   53123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:45:19.510702   53123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 19:45:19.510790   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:45:19.510807   53123 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 19:45:19.510813   53123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:45:19.510838   53123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 19:45:19.510893   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:45:19.510909   53123 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 19:45:19.510915   53123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:45:19.510936   53123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 19:45:19.510992   53123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-543895 san=[127.0.0.1 192.168.50.9 force-systemd-flag-543895 localhost minikube]
	I0425 19:45:19.693616   53123 provision.go:177] copyRemoteCerts
	I0425 19:45:19.693665   53123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 19:45:19.693693   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.696338   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.696707   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.696740   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.696926   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.697109   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.697286   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.697424   53123 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa Username:docker}
	I0425 19:45:19.787717   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 19:45:19.787789   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 19:45:19.815043   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 19:45:19.815114   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0425 19:45:19.843973   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 19:45:19.844045   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 19:45:19.871108   53123 provision.go:87] duration metric: took 366.993271ms to configureAuth
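
configureAuth above generates a server certificate whose SANs cover the loopback address, the VM IP, the machine name, localhost and minikube. The sketch below shows roughly how such a certificate can be produced with Go's crypto/x509; it is self-signed for brevity, whereas minikube signs with the CA key listed in the log, so treat it as an illustration only:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SAN and org values taken from the provisioning log above.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-flag-543895"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"force-systemd-flag-543895", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.9")},
    	}
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// Self-signed for brevity; the real flow signs with the cluster CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
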
	I0425 19:45:19.871134   53123 buildroot.go:189] setting minikube options for container-runtime
	I0425 19:45:19.871345   53123 config.go:182] Loaded profile config "force-systemd-flag-543895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:45:19.871422   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.874074   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.874485   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.874515   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.874711   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.874940   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.875140   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.875320   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.875492   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:19.875706   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:19.875724   53123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 19:45:20.160160   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 19:45:20.160197   53123 main.go:141] libmachine: Checking connection to Docker...
	I0425 19:45:20.160209   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetURL
	I0425 19:45:20.161574   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Using libvirt version 6000000
	I0425 19:45:20.163858   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.164182   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.164219   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.164357   53123 main.go:141] libmachine: Docker is up and running!
	I0425 19:45:20.164371   53123 main.go:141] libmachine: Reticulating splines...
	I0425 19:45:20.164378   53123 client.go:171] duration metric: took 25.943084638s to LocalClient.Create
	I0425 19:45:20.164398   53123 start.go:167] duration metric: took 25.943137001s to libmachine.API.Create "force-systemd-flag-543895"
	I0425 19:45:20.164411   53123 start.go:293] postStartSetup for "force-systemd-flag-543895" (driver="kvm2")
	I0425 19:45:20.164419   53123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 19:45:20.164435   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:20.164672   53123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 19:45:20.164693   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:20.166592   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.166936   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.166968   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.167110   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:20.167312   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:20.167483   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:20.167662   53123 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa Username:docker}
	I0425 19:45:20.256673   53123 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 19:45:20.262372   53123 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 19:45:20.262397   53123 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 19:45:20.262458   53123 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 19:45:20.262536   53123 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 19:45:20.262545   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 19:45:20.262661   53123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 19:45:20.273631   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:45:20.305134   53123 start.go:296] duration metric: took 140.709387ms for postStartSetup
	I0425 19:45:20.305193   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetConfigRaw
	I0425 19:45:20.305840   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetIP
	I0425 19:45:20.308241   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.308636   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.308669   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.308892   53123 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/config.json ...
	I0425 19:45:20.309064   53123 start.go:128] duration metric: took 26.113246135s to createHost
	I0425 19:45:20.309086   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:20.311364   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.311745   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.311773   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.311908   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:20.312058   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:20.312194   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:20.312358   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:20.312511   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:20.312720   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:20.312736   53123 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 19:45:20.427213   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714074320.376678989
	
	I0425 19:45:20.427247   53123 fix.go:216] guest clock: 1714074320.376678989
	I0425 19:45:20.427262   53123 fix.go:229] Guest: 2024-04-25 19:45:20.376678989 +0000 UTC Remote: 2024-04-25 19:45:20.309074769 +0000 UTC m=+46.952429901 (delta=67.60422ms)
	I0425 19:45:20.427307   53123 fix.go:200] guest clock delta is within tolerance: 67.60422ms
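
The fix step above compares the guest clock (read via date over SSH) against the local timestamp and accepts the host when the delta is small. A minimal sketch of that comparison; the one-second tolerance here is an assumption for illustration, not the value minikube enforces:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	guest := time.Unix(1714074320, 376678989)                         // guest's `date +%s.%N` from the log
    	remote := time.Date(2024, 4, 25, 19, 45, 20, 309074769, time.UTC) // host-side timestamp from the log
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 1 * time.Second // assumed threshold, illustration only
    	fmt.Printf("guest clock delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
    }
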
	I0425 19:45:20.427314   53123 start.go:83] releasing machines lock for "force-systemd-flag-543895", held for 26.231666855s
	I0425 19:45:20.427348   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:20.427601   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetIP
	I0425 19:45:20.430778   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.431219   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.431259   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.431409   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:20.431990   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:20.432198   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:20.432271   53123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 19:45:20.432331   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:20.432487   53123 ssh_runner.go:195] Run: cat /version.json
	I0425 19:45:20.432511   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:20.435553   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.435850   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.435937   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.435964   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.436074   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:20.436231   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:20.436249   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.436271   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.436421   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:20.436469   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:20.436638   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:20.436652   53123 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa Username:docker}
	I0425 19:45:20.436771   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:20.436902   53123 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa Username:docker}
	I0425 19:45:20.554724   53123 ssh_runner.go:195] Run: systemctl --version
	I0425 19:45:20.562146   53123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 19:45:20.733216   53123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 19:45:20.740269   53123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 19:45:20.740344   53123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 19:45:20.760248   53123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 19:45:20.760273   53123 start.go:494] detecting cgroup driver to use...
	I0425 19:45:20.760287   53123 start.go:498] using "systemd" cgroup driver as enforced via flags
	I0425 19:45:20.760345   53123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 19:45:20.780769   53123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 19:45:20.797615   53123 docker.go:217] disabling cri-docker service (if available) ...
	I0425 19:45:20.797674   53123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 19:45:20.812625   53123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 19:45:20.827812   53123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 19:45:20.952544   53123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 19:45:21.117205   53123 docker.go:233] disabling docker service ...
	I0425 19:45:21.117281   53123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 19:45:21.134892   53123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 19:45:21.148591   53123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 19:45:21.300879   53123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 19:45:21.441087   53123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 19:45:21.457693   53123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 19:45:21.478801   53123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 19:45:21.478852   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.489647   53123 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0425 19:45:21.489697   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.500624   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.511293   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.523532   53123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 19:45:21.536288   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.549316   53123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.576487   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
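
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the systemd cgroup manager, moves conmon into the pod cgroup, and opens unprivileged low ports. A rough Go equivalent of the cgroup_manager rewrite, shown only to make the edit explicit (the test itself performs it with sed over SSH as logged):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setCgroupManager rewrites the cgroup_manager key in a CRI-O drop-in,
    // mirroring the sed invocation in the log above.
    func setCgroupManager(conf, driver string) string {
    	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	return re.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", driver))
    }

    func main() {
    	// Hypothetical drop-in contents, for illustration only.
    	conf := "[crio.runtime]\ncgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"system.slice\"\n"
    	fmt.Print(setCgroupManager(conf, "systemd"))
    }
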
	I0425 19:45:21.592040   53123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 19:45:21.603959   53123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 19:45:21.604031   53123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 19:45:21.620494   53123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 19:45:21.633054   53123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:45:21.767677   53123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 19:45:21.925024   53123 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 19:45:21.925085   53123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 19:45:21.931184   53123 start.go:562] Will wait 60s for crictl version
	I0425 19:45:21.931238   53123 ssh_runner.go:195] Run: which crictl
	I0425 19:45:21.936183   53123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 19:45:21.979349   53123 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 19:45:21.979431   53123 ssh_runner.go:195] Run: crio --version
	I0425 19:45:22.014991   53123 ssh_runner.go:195] Run: crio --version
	I0425 19:45:22.051079   53123 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 19:45:20.430001   53486 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0425 19:45:20.430244   53486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:45:20.430284   53486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:45:20.449106   53486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45377
	I0425 19:45:20.449481   53486 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:45:20.450071   53486 main.go:141] libmachine: Using API Version  1
	I0425 19:45:20.450088   53486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:45:20.450481   53486 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:45:20.450667   53486 main.go:141] libmachine: (NoKubernetes-335371) Calling .GetMachineName
	I0425 19:45:20.450806   53486 main.go:141] libmachine: (NoKubernetes-335371) Calling .DriverName
	I0425 19:45:20.450946   53486 start.go:159] libmachine.API.Create for "NoKubernetes-335371" (driver="kvm2")
	I0425 19:45:20.450963   53486 client.go:168] LocalClient.Create starting
	I0425 19:45:20.450983   53486 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 19:45:20.451005   53486 main.go:141] libmachine: Decoding PEM data...
	I0425 19:45:20.451015   53486 main.go:141] libmachine: Parsing certificate...
	I0425 19:45:20.451053   53486 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 19:45:20.451066   53486 main.go:141] libmachine: Decoding PEM data...
	I0425 19:45:20.451073   53486 main.go:141] libmachine: Parsing certificate...
	I0425 19:45:20.451097   53486 main.go:141] libmachine: Running pre-create checks...
	I0425 19:45:20.451102   53486 main.go:141] libmachine: (NoKubernetes-335371) Calling .PreCreateCheck
	I0425 19:45:20.451546   53486 main.go:141] libmachine: (NoKubernetes-335371) Calling .GetConfigRaw
	I0425 19:45:20.452049   53486 main.go:141] libmachine: Creating machine...
	I0425 19:45:20.452058   53486 main.go:141] libmachine: (NoKubernetes-335371) Calling .Create
	I0425 19:45:20.452177   53486 main.go:141] libmachine: (NoKubernetes-335371) Creating KVM machine...
	I0425 19:45:20.453503   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | found existing default KVM network
	I0425 19:45:20.455198   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:20.455044   53612 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0f0}
	I0425 19:45:20.455239   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | created network xml: 
	I0425 19:45:20.455253   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | <network>
	I0425 19:45:20.455263   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   <name>mk-NoKubernetes-335371</name>
	I0425 19:45:20.455269   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   <dns enable='no'/>
	I0425 19:45:20.455277   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   
	I0425 19:45:20.455285   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0425 19:45:20.455292   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |     <dhcp>
	I0425 19:45:20.455299   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0425 19:45:20.455313   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |     </dhcp>
	I0425 19:45:20.455319   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   </ip>
	I0425 19:45:20.455326   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   
	I0425 19:45:20.455336   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | </network>
	I0425 19:45:20.455349   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | 
	I0425 19:45:20.461595   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | trying to create private KVM network mk-NoKubernetes-335371 192.168.39.0/24...
	I0425 19:45:20.535365   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | private KVM network mk-NoKubernetes-335371 192.168.39.0/24 created
	I0425 19:45:20.535404   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:20.535349   53612 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:45:20.535439   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371 ...
	I0425 19:45:20.535467   53486 main.go:141] libmachine: (NoKubernetes-335371) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 19:45:20.535485   53486 main.go:141] libmachine: (NoKubernetes-335371) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 19:45:20.791446   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:20.791313   53612 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371/id_rsa...
	I0425 19:45:20.911545   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:20.911389   53612 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371/NoKubernetes-335371.rawdisk...
	I0425 19:45:20.911565   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Writing magic tar header
	I0425 19:45:20.911581   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Writing SSH key tar header
	I0425 19:45:20.911592   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:20.911563   53612 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371 ...
	I0425 19:45:20.911739   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371
	I0425 19:45:20.911779   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 19:45:20.911793   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371 (perms=drwx------)
	I0425 19:45:20.911812   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 19:45:20.911823   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 19:45:20.911835   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 19:45:20.911845   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 19:45:20.911855   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:45:20.911865   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 19:45:20.911873   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 19:45:20.911883   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins
	I0425 19:45:20.911896   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home
	I0425 19:45:20.911907   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Skipping /home - not owner
	I0425 19:45:20.911917   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 19:45:20.911931   53486 main.go:141] libmachine: (NoKubernetes-335371) Creating domain...
	I0425 19:45:20.913403   53486 main.go:141] libmachine: (NoKubernetes-335371) define libvirt domain using xml: 
	I0425 19:45:20.913414   53486 main.go:141] libmachine: (NoKubernetes-335371) <domain type='kvm'>
	I0425 19:45:20.913422   53486 main.go:141] libmachine: (NoKubernetes-335371)   <name>NoKubernetes-335371</name>
	I0425 19:45:20.913427   53486 main.go:141] libmachine: (NoKubernetes-335371)   <memory unit='MiB'>6000</memory>
	I0425 19:45:20.913433   53486 main.go:141] libmachine: (NoKubernetes-335371)   <vcpu>2</vcpu>
	I0425 19:45:20.913437   53486 main.go:141] libmachine: (NoKubernetes-335371)   <features>
	I0425 19:45:20.913443   53486 main.go:141] libmachine: (NoKubernetes-335371)     <acpi/>
	I0425 19:45:20.913447   53486 main.go:141] libmachine: (NoKubernetes-335371)     <apic/>
	I0425 19:45:20.913453   53486 main.go:141] libmachine: (NoKubernetes-335371)     <pae/>
	I0425 19:45:20.913459   53486 main.go:141] libmachine: (NoKubernetes-335371)     
	I0425 19:45:20.913465   53486 main.go:141] libmachine: (NoKubernetes-335371)   </features>
	I0425 19:45:20.913470   53486 main.go:141] libmachine: (NoKubernetes-335371)   <cpu mode='host-passthrough'>
	I0425 19:45:20.913475   53486 main.go:141] libmachine: (NoKubernetes-335371)   
	I0425 19:45:20.913480   53486 main.go:141] libmachine: (NoKubernetes-335371)   </cpu>
	I0425 19:45:20.913486   53486 main.go:141] libmachine: (NoKubernetes-335371)   <os>
	I0425 19:45:20.913490   53486 main.go:141] libmachine: (NoKubernetes-335371)     <type>hvm</type>
	I0425 19:45:20.913496   53486 main.go:141] libmachine: (NoKubernetes-335371)     <boot dev='cdrom'/>
	I0425 19:45:20.913500   53486 main.go:141] libmachine: (NoKubernetes-335371)     <boot dev='hd'/>
	I0425 19:45:20.913507   53486 main.go:141] libmachine: (NoKubernetes-335371)     <bootmenu enable='no'/>
	I0425 19:45:20.913511   53486 main.go:141] libmachine: (NoKubernetes-335371)   </os>
	I0425 19:45:20.913517   53486 main.go:141] libmachine: (NoKubernetes-335371)   <devices>
	I0425 19:45:20.913522   53486 main.go:141] libmachine: (NoKubernetes-335371)     <disk type='file' device='cdrom'>
	I0425 19:45:20.913532   53486 main.go:141] libmachine: (NoKubernetes-335371)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371/boot2docker.iso'/>
	I0425 19:45:20.913537   53486 main.go:141] libmachine: (NoKubernetes-335371)       <target dev='hdc' bus='scsi'/>
	I0425 19:45:20.913543   53486 main.go:141] libmachine: (NoKubernetes-335371)       <readonly/>
	I0425 19:45:20.913548   53486 main.go:141] libmachine: (NoKubernetes-335371)     </disk>
	I0425 19:45:20.913556   53486 main.go:141] libmachine: (NoKubernetes-335371)     <disk type='file' device='disk'>
	I0425 19:45:20.913564   53486 main.go:141] libmachine: (NoKubernetes-335371)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 19:45:20.913574   53486 main.go:141] libmachine: (NoKubernetes-335371)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371/NoKubernetes-335371.rawdisk'/>
	I0425 19:45:20.913600   53486 main.go:141] libmachine: (NoKubernetes-335371)       <target dev='hda' bus='virtio'/>
	I0425 19:45:20.913608   53486 main.go:141] libmachine: (NoKubernetes-335371)     </disk>
	I0425 19:45:20.913613   53486 main.go:141] libmachine: (NoKubernetes-335371)     <interface type='network'>
	I0425 19:45:20.913620   53486 main.go:141] libmachine: (NoKubernetes-335371)       <source network='mk-NoKubernetes-335371'/>
	I0425 19:45:20.913628   53486 main.go:141] libmachine: (NoKubernetes-335371)       <model type='virtio'/>
	I0425 19:45:20.913634   53486 main.go:141] libmachine: (NoKubernetes-335371)     </interface>
	I0425 19:45:20.913640   53486 main.go:141] libmachine: (NoKubernetes-335371)     <interface type='network'>
	I0425 19:45:20.913647   53486 main.go:141] libmachine: (NoKubernetes-335371)       <source network='default'/>
	I0425 19:45:20.913652   53486 main.go:141] libmachine: (NoKubernetes-335371)       <model type='virtio'/>
	I0425 19:45:20.913659   53486 main.go:141] libmachine: (NoKubernetes-335371)     </interface>
	I0425 19:45:20.913665   53486 main.go:141] libmachine: (NoKubernetes-335371)     <serial type='pty'>
	I0425 19:45:20.913672   53486 main.go:141] libmachine: (NoKubernetes-335371)       <target port='0'/>
	I0425 19:45:20.913677   53486 main.go:141] libmachine: (NoKubernetes-335371)     </serial>
	I0425 19:45:20.913684   53486 main.go:141] libmachine: (NoKubernetes-335371)     <console type='pty'>
	I0425 19:45:20.913689   53486 main.go:141] libmachine: (NoKubernetes-335371)       <target type='serial' port='0'/>
	I0425 19:45:20.913695   53486 main.go:141] libmachine: (NoKubernetes-335371)     </console>
	I0425 19:45:20.913701   53486 main.go:141] libmachine: (NoKubernetes-335371)     <rng model='virtio'>
	I0425 19:45:20.913709   53486 main.go:141] libmachine: (NoKubernetes-335371)       <backend model='random'>/dev/random</backend>
	I0425 19:45:20.913714   53486 main.go:141] libmachine: (NoKubernetes-335371)     </rng>
	I0425 19:45:20.913721   53486 main.go:141] libmachine: (NoKubernetes-335371)     
	I0425 19:45:20.913725   53486 main.go:141] libmachine: (NoKubernetes-335371)     
	I0425 19:45:20.913731   53486 main.go:141] libmachine: (NoKubernetes-335371)   </devices>
	I0425 19:45:20.913735   53486 main.go:141] libmachine: (NoKubernetes-335371) </domain>
	I0425 19:45:20.913746   53486 main.go:141] libmachine: (NoKubernetes-335371) 
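
The domain definition above is emitted as libvirt XML before the VM is created. A trimmed-down sketch of rendering a similar definition with Go's text/template; the disk path is a placeholder and the template omits most of the devices present in the real definition, so it is an illustration rather than minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A reduced domain definition in the same shape as the XML logged above.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
    	t := template.Must(template.New("domain").Parse(domainTmpl))
    	_ = t.Execute(os.Stdout, struct {
    		Name, DiskPath, Network string
    		MemoryMiB, CPUs         int
    	}{
    		Name:      "NoKubernetes-335371",
    		DiskPath:  "/path/to/NoKubernetes-335371.rawdisk", // placeholder path
    		Network:   "mk-NoKubernetes-335371",
    		MemoryMiB: 6000,
    		CPUs:      2,
    	})
    }
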
	I0425 19:45:20.919033   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:0f:a2:b9 in network default
	I0425 19:45:20.919815   53486 main.go:141] libmachine: (NoKubernetes-335371) Ensuring networks are active...
	I0425 19:45:20.919830   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:20.920523   53486 main.go:141] libmachine: (NoKubernetes-335371) Ensuring network default is active
	I0425 19:45:20.920790   53486 main.go:141] libmachine: (NoKubernetes-335371) Ensuring network mk-NoKubernetes-335371 is active
	I0425 19:45:20.921306   53486 main.go:141] libmachine: (NoKubernetes-335371) Getting domain xml...
	I0425 19:45:20.922038   53486 main.go:141] libmachine: (NoKubernetes-335371) Creating domain...
	I0425 19:45:22.219916   53486 main.go:141] libmachine: (NoKubernetes-335371) Waiting to get IP...
	I0425 19:45:22.220690   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:22.221193   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:22.221232   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:22.221159   53612 retry.go:31] will retry after 245.322087ms: waiting for machine to come up
	I0425 19:45:22.468728   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:22.469457   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:22.469479   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:22.469417   53612 retry.go:31] will retry after 246.156953ms: waiting for machine to come up
	I0425 19:45:22.716943   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:22.717457   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:22.717475   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:22.717420   53612 retry.go:31] will retry after 421.840693ms: waiting for machine to come up
	I0425 19:45:23.141094   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:23.141650   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:23.141673   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:23.141551   53612 retry.go:31] will retry after 466.266362ms: waiting for machine to come up
	I0425 19:45:22.052461   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetIP
	I0425 19:45:22.055668   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:22.056073   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:22.056105   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:22.056349   53123 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0425 19:45:22.062952   53123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 19:45:22.081420   53123 kubeadm.go:877] updating cluster {Name:force-systemd-flag-543895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-543895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 19:45:22.081511   53123 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:45:22.081560   53123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:45:22.124826   53123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 19:45:22.124897   53123 ssh_runner.go:195] Run: which lz4
	I0425 19:45:22.129811   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0425 19:45:22.129906   53123 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 19:45:22.134856   53123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 19:45:22.134886   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 19:45:21.679970   52810 pod_ready.go:102] pod "kube-controller-manager-pause-762664" in "kube-system" namespace has status "Ready":"False"
	I0425 19:45:23.680917   52810 pod_ready.go:102] pod "kube-controller-manager-pause-762664" in "kube-system" namespace has status "Ready":"False"
	I0425 19:45:24.181533   52810 pod_ready.go:92] pod "kube-controller-manager-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.181564   52810 pod_ready.go:81] duration metric: took 4.509605206s for pod "kube-controller-manager-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.181580   52810 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2lhr" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.188218   52810 pod_ready.go:92] pod "kube-proxy-j2lhr" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.188252   52810 pod_ready.go:81] duration metric: took 6.655909ms for pod "kube-proxy-j2lhr" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.188267   52810 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.194046   52810 pod_ready.go:92] pod "kube-scheduler-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.194070   52810 pod_ready.go:81] duration metric: took 5.795243ms for pod "kube-scheduler-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.194080   52810 pod_ready.go:38] duration metric: took 15.062583775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 19:45:24.194100   52810 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 19:45:24.211876   52810 ops.go:34] apiserver oom_adj: -16
	I0425 19:45:24.211898   52810 kubeadm.go:591] duration metric: took 33.157499571s to restartPrimaryControlPlane
	I0425 19:45:24.211909   52810 kubeadm.go:393] duration metric: took 33.478359534s to StartCluster
	I0425 19:45:24.211929   52810 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:24.212017   52810 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:45:24.213355   52810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:24.213648   52810 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 19:45:24.215507   52810 out.go:177] * Verifying Kubernetes components...
	I0425 19:45:24.213731   52810 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 19:45:24.213901   52810 config.go:182] Loaded profile config "pause-762664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:45:24.216978   52810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:45:24.218624   52810 out.go:177] * Enabled addons: 
	I0425 19:45:24.219998   52810 addons.go:505] duration metric: took 6.278948ms for enable addons: enabled=[]
	I0425 19:45:24.450530   52810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 19:45:24.472557   52810 node_ready.go:35] waiting up to 6m0s for node "pause-762664" to be "Ready" ...
	I0425 19:45:24.476419   52810 node_ready.go:49] node "pause-762664" has status "Ready":"True"
	I0425 19:45:24.476442   52810 node_ready.go:38] duration metric: took 3.852908ms for node "pause-762664" to be "Ready" ...
	I0425 19:45:24.476459   52810 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 19:45:24.483828   52810 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.490861   52810 pod_ready.go:92] pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.490888   52810 pod_ready.go:81] duration metric: took 7.029776ms for pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.490899   52810 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.576371   52810 pod_ready.go:92] pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.576402   52810 pod_ready.go:81] duration metric: took 85.494399ms for pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.576417   52810 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.977391   52810 pod_ready.go:92] pod "etcd-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.977417   52810 pod_ready.go:81] duration metric: took 400.992911ms for pod "etcd-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.977429   52810 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:25.378287   52810 pod_ready.go:92] pod "kube-apiserver-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:25.378316   52810 pod_ready.go:81] duration metric: took 400.878843ms for pod "kube-apiserver-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:25.378330   52810 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:25.775693   52810 pod_ready.go:92] pod "kube-controller-manager-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:25.775725   52810 pod_ready.go:81] duration metric: took 397.385928ms for pod "kube-controller-manager-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:25.775740   52810 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2lhr" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:26.177830   52810 pod_ready.go:92] pod "kube-proxy-j2lhr" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:26.177862   52810 pod_ready.go:81] duration metric: took 402.114436ms for pod "kube-proxy-j2lhr" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:26.177876   52810 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:26.577727   52810 pod_ready.go:92] pod "kube-scheduler-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:26.577756   52810 pod_ready.go:81] duration metric: took 399.871415ms for pod "kube-scheduler-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:26.577767   52810 pod_ready.go:38] duration metric: took 2.101296019s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 19:45:26.577795   52810 api_server.go:52] waiting for apiserver process to appear ...
	I0425 19:45:26.577853   52810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 19:45:26.602819   52810 api_server.go:72] duration metric: took 2.389128704s to wait for apiserver process to appear ...
	I0425 19:45:26.602849   52810 api_server.go:88] waiting for apiserver healthz status ...
	I0425 19:45:26.602871   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:26.616642   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I0425 19:45:26.618473   52810 api_server.go:141] control plane version: v1.30.0
	I0425 19:45:26.618501   52810 api_server.go:131] duration metric: took 15.644112ms to wait for apiserver health ...
	I0425 19:45:26.618511   52810 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 19:45:26.780634   52810 system_pods.go:59] 7 kube-system pods found
	I0425 19:45:26.780666   52810 system_pods.go:61] "coredns-7db6d8ff4d-g4zcp" [d9d92885-9821-488c-bb93-a4a35d60fb1a] Running
	I0425 19:45:26.780673   52810 system_pods.go:61] "coredns-7db6d8ff4d-x667t" [e764791e-c170-49f4-b844-668b59f31072] Running
	I0425 19:45:26.780678   52810 system_pods.go:61] "etcd-pause-762664" [7f83a16c-07d2-4c41-b029-9e022a962f8b] Running
	I0425 19:45:26.780682   52810 system_pods.go:61] "kube-apiserver-pause-762664" [8b442b86-8626-4b72-8583-36c3e2617faa] Running
	I0425 19:45:26.780686   52810 system_pods.go:61] "kube-controller-manager-pause-762664" [0d731a16-9799-4916-8ce7-10b8b38657a3] Running
	I0425 19:45:26.780699   52810 system_pods.go:61] "kube-proxy-j2lhr" [3bb81443-7890-4887-9031-5a05eba9d67d] Running
	I0425 19:45:26.780704   52810 system_pods.go:61] "kube-scheduler-pause-762664" [98bb7678-6066-4fc0-ab0c-c90b36ac5339] Running
	I0425 19:45:26.780712   52810 system_pods.go:74] duration metric: took 162.193444ms to wait for pod list to return data ...
	I0425 19:45:26.780721   52810 default_sa.go:34] waiting for default service account to be created ...
	I0425 19:45:26.976828   52810 default_sa.go:45] found service account: "default"
	I0425 19:45:26.976859   52810 default_sa.go:55] duration metric: took 196.130948ms for default service account to be created ...
	I0425 19:45:26.976871   52810 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 19:45:27.180325   52810 system_pods.go:86] 7 kube-system pods found
	I0425 19:45:27.180357   52810 system_pods.go:89] "coredns-7db6d8ff4d-g4zcp" [d9d92885-9821-488c-bb93-a4a35d60fb1a] Running
	I0425 19:45:27.180363   52810 system_pods.go:89] "coredns-7db6d8ff4d-x667t" [e764791e-c170-49f4-b844-668b59f31072] Running
	I0425 19:45:27.180367   52810 system_pods.go:89] "etcd-pause-762664" [7f83a16c-07d2-4c41-b029-9e022a962f8b] Running
	I0425 19:45:27.180372   52810 system_pods.go:89] "kube-apiserver-pause-762664" [8b442b86-8626-4b72-8583-36c3e2617faa] Running
	I0425 19:45:27.180376   52810 system_pods.go:89] "kube-controller-manager-pause-762664" [0d731a16-9799-4916-8ce7-10b8b38657a3] Running
	I0425 19:45:27.180382   52810 system_pods.go:89] "kube-proxy-j2lhr" [3bb81443-7890-4887-9031-5a05eba9d67d] Running
	I0425 19:45:27.180387   52810 system_pods.go:89] "kube-scheduler-pause-762664" [98bb7678-6066-4fc0-ab0c-c90b36ac5339] Running
	I0425 19:45:27.180395   52810 system_pods.go:126] duration metric: took 203.518429ms to wait for k8s-apps to be running ...
	I0425 19:45:27.180408   52810 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 19:45:27.180457   52810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 19:45:27.203127   52810 system_svc.go:56] duration metric: took 22.709129ms WaitForService to wait for kubelet
	I0425 19:45:27.203163   52810 kubeadm.go:576] duration metric: took 2.989476253s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:45:27.203185   52810 node_conditions.go:102] verifying NodePressure condition ...
	I0425 19:45:27.377626   52810 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 19:45:27.377657   52810 node_conditions.go:123] node cpu capacity is 2
	I0425 19:45:27.377666   52810 node_conditions.go:105] duration metric: took 174.476542ms to run NodePressure ...
	I0425 19:45:27.377677   52810 start.go:240] waiting for startup goroutines ...
	I0425 19:45:27.377683   52810 start.go:245] waiting for cluster config update ...
	I0425 19:45:27.377690   52810 start.go:254] writing updated cluster config ...
	I0425 19:45:27.393126   52810 ssh_runner.go:195] Run: rm -f paused
	I0425 19:45:27.445169   52810 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 19:45:27.596871   52810 out.go:177] * Done! kubectl is now configured to use "pause-762664" cluster and "default" namespace by default
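The pod_ready entries above trace the check minikube runs after restarting this control plane: poll each system-critical pod (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) until its PodReady condition is True under an overall timeout, then verify the apiserver /healthz endpoint, the default service account, and the kubelet service. A minimal client-go sketch of that readiness poll follows; the kubeconfig path and pod name are placeholders, not values from this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path and pod name; illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Overall deadline, analogous to the "waiting up to 6m0s" lines above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-pause-762664", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}

The real verification in the log goes further than this sketch: it also waits for node readiness, running kube-system pods, and an active kubelet service before reporting Done.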
	I0425 19:45:23.609316   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:23.609879   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:23.609905   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:23.609830   53612 retry.go:31] will retry after 694.530439ms: waiting for machine to come up
	I0425 19:45:24.305621   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:24.306085   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:24.306097   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:24.306039   53612 retry.go:31] will retry after 869.825254ms: waiting for machine to come up
	I0425 19:45:25.177950   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:25.178481   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:25.178502   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:25.178427   53612 retry.go:31] will retry after 737.309374ms: waiting for machine to come up
	I0425 19:45:25.917858   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:25.918595   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:25.918611   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:25.918493   53612 retry.go:31] will retry after 1.465177218s: waiting for machine to come up
	I0425 19:45:27.385064   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:27.385546   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:27.385562   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:27.385493   53612 retry.go:31] will retry after 1.813034414s: waiting for machine to come up
	I0425 19:45:23.912327   53123 crio.go:462] duration metric: took 1.782430898s to copy over tarball
	I0425 19:45:23.912427   53123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 19:45:26.684051   53123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.771593025s)
	I0425 19:45:26.684089   53123 crio.go:469] duration metric: took 2.771727474s to extract the tarball
	I0425 19:45:26.684102   53123 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 19:45:26.725694   53123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:45:26.784338   53123 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:45:26.784359   53123 cache_images.go:84] Images are preloaded, skipping loading
	I0425 19:45:26.784368   53123 kubeadm.go:928] updating node { 192.168.50.9 8443 v1.30.0 crio true true} ...
	I0425 19:45:26.784490   53123 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-543895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-543895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 19:45:26.784569   53123 ssh_runner.go:195] Run: crio config
	I0425 19:45:26.849328   53123 cni.go:84] Creating CNI manager for ""
	I0425 19:45:26.849350   53123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:45:26.849361   53123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 19:45:26.849386   53123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.9 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-543895 NodeName:force-systemd-flag-543895 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.9 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 19:45:26.849535   53123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-543895"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
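
The block above is the kubeadm and kubelet configuration minikube rendered for force-systemd-flag-543895: API server on 192.168.50.9:8443, CRI-O socket, systemd cgroup driver, pod CIDR 10.244.0.0/16, and hard disk eviction effectively disabled. A small sketch of rendering such a manifest from per-node parameters with text/template follows; the parameter struct and the trimmed template are illustrative, not minikube's actual bootstrapper template.

package main

import (
	"os"
	"text/template"
)

// nodeParams holds the per-node values substituted into the manifest.
// The field set is a simplification of what kubeadm.go actually tracks.
type nodeParams struct {
	NodeName   string
	NodeIP     string
	K8sVersion string
	PodCIDR    string
}

const manifest = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodCIDR}}"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`

func main() {
	p := nodeParams{
		NodeName:   "force-systemd-flag-543895",
		NodeIP:     "192.168.50.9",
		K8sVersion: "v1.30.0",
		PodCIDR:    "10.244.0.0/16",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(manifest))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}

The rendered file is then copied to /var/tmp/minikube/kubeadm.yaml.new over SSH, as the scp lines below show.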
	
	I0425 19:45:26.849608   53123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 19:45:26.866688   53123 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 19:45:26.866764   53123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 19:45:26.882521   53123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0425 19:45:26.906294   53123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 19:45:26.927866   53123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0425 19:45:26.947670   53123 ssh_runner.go:195] Run: grep 192.168.50.9	control-plane.minikube.internal$ /etc/hosts
	I0425 19:45:26.952548   53123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 19:45:26.966828   53123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:45:27.108519   53123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 19:45:27.129593   53123 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895 for IP: 192.168.50.9
	I0425 19:45:27.129686   53123 certs.go:194] generating shared ca certs ...
	I0425 19:45:27.129720   53123 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.129932   53123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 19:45:27.130003   53123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 19:45:27.130018   53123 certs.go:256] generating profile certs ...
	I0425 19:45:27.130094   53123 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.key
	I0425 19:45:27.130113   53123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.crt with IP's: []
	I0425 19:45:27.339949   53123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.crt ...
	I0425 19:45:27.339982   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.crt: {Name:mka21ce8700d96e7e2a7baac6295d37643d39833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.340163   53123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.key ...
	I0425 19:45:27.340178   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.key: {Name:mk3da2c7077072027206d875958a4c67e4437e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.340278   53123 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key.82ee14db
	I0425 19:45:27.340295   53123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt.82ee14db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.9]
	I0425 19:45:27.540829   53123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt.82ee14db ...
	I0425 19:45:27.540860   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt.82ee14db: {Name:mk5bb7660e299cd7366c328e4da5caacef99ac61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.555043   53123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key.82ee14db ...
	I0425 19:45:27.555086   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key.82ee14db: {Name:mk49224be3d2af8bba4a61205ea3457dd2c420f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.555222   53123 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt.82ee14db -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt
	I0425 19:45:27.555310   53123 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key.82ee14db -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key
	I0425 19:45:27.555429   53123 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.key
	I0425 19:45:27.555449   53123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.crt with IP's: []
	I0425 19:45:27.878266   53123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.crt ...
	I0425 19:45:27.878308   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.crt: {Name:mk139287786559070679219ccc67a1aedf78e07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.878521   53123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.key ...
	I0425 19:45:27.878552   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.key: {Name:mk97fc4bdbe252521c88b221264384505cbf2911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.878671   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 19:45:27.878698   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 19:45:27.878718   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 19:45:27.878746   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 19:45:27.878768   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 19:45:27.878791   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 19:45:27.878813   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 19:45:27.878835   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 19:45:27.878907   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 19:45:27.878963   53123 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 19:45:27.878979   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 19:45:27.879013   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 19:45:27.879049   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 19:45:27.879085   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 19:45:27.879150   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:45:27.879204   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 19:45:27.879222   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:45:27.879239   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 19:45:27.879909   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 19:45:27.919821   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 19:45:27.966263   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 19:45:28.006119   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 19:45:28.043783   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0425 19:45:28.075042   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 19:45:28.114717   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 19:45:28.152622   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 19:45:28.192689   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 19:45:28.228647   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 19:45:28.257712   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 19:45:28.286541   53123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 19:45:28.309154   53123 ssh_runner.go:195] Run: openssl version
	I0425 19:45:28.316476   53123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 19:45:28.331554   53123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 19:45:28.338490   53123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:45:28.338545   53123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 19:45:28.347190   53123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 19:45:28.364050   53123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 19:45:28.380933   53123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:45:28.388170   53123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:45:28.388235   53123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:45:28.395054   53123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
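The certs.go and crypto.go lines above generate the profile's client, apiserver, and aggregator ("proxy-client") certificates, signing them with the shared minikubeCA and giving the apiserver certificate the SAN IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.9]; the CA files are then copied to /usr/share/ca-certificates and linked into /etc/ssl/certs under their OpenSSL subject hashes. A standard-library sketch of the generate-and-sign step follows; key size, subjects, and validity periods are illustrative, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, analogous to minikubeCA (parameters illustrative).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver certificate signed by the CA, carrying the SAN IPs from the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.9"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// In the real flow the certificate and its key are written under
	// .minikube/profiles/<name>/ and copied to /var/lib/minikube/certs.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}

The /etc/ssl/certs link names seen above (3ec20f2e.0, b5213941.0) come from openssl x509 -hash -noout -in <cert>, which is how OpenSSL locates a trusted CA by its subject hash.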
	
	
	==> CRI-O <==
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.267850196Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714074329267813789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e4203f6-fdfc-44d3-a853-1c882555d633 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.268820260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53701846-7f03-45d2-a443-9c922e94813a name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.268938697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53701846-7f03-45d2-a443-9c922e94813a name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.269426726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85f0f3551bac69e811415cfa0d495ebdc5b9f49409cae7becb932794f20a7f7e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074304165981795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52367164dab8a899fc7b1608d061d78e63e5b1be7e41d4dcf9ccb6bde2f27bf7,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074304153818475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e573cff37a1d889f0c091081889067a61cfab99fdd3b1dfd3934ef6b2e481aed,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074304130602073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f44775b3f5b755a6fc92c9d80b0ac7b2c13447e86f82b5d5b63dd1758cb6d06,PodSandboxId:f7b78dc49b4d5dd301d07d39f7c94b61ab2f5d8f12e463339263c00e580e3ace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714074295103647031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c63ed1a37e2bc42658a1e1eb2034458102ecbf8ec38cef2a8ae7a87507c37c,PodSandboxId:2b61a0ef18feb683cc3df6bf868bc06a7d34d83c838a9aef91eaaf5f4b325f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293533381925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb67105e46396566ae583a988c431d21e071f75697d23bd1ac8a3bfcb72ae03,PodSandboxId:786cf10e286c04f7911951463e9a98e2f467c9e79dab5768b91a619835e738fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293370659675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abe9ce14d8d9f3297db7653b706eb7076620562e28b6bb05be8be780daf4ca7,PodSandboxId:2a9bef34205e4a7e271253a069280737e27759acf70b88c2e56257f1b81572d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074290874165433,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074290800404366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074290766988399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074290669380161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762
664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0,PodSandboxId:2efdc1ea633beae5069e0de2197c59ca4bb48d90af87160c4ad87145cb1095c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714074253934831077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24,PodSandboxId:d9422001b9252d2fffb537fc620a587e6b06cc91e7d252a27046b3bb00716f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Sta
te:CONTAINER_EXITED,CreatedAt:1714074253929967712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4,PodSandboxId:e6ca87249707fda91783473e1c66fbcb661ae3296f85286084a2f760f577c224,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714074253111526962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab,PodSandboxId:5ec0be047ed337ea2ed0a1ace797074029d7603d0e83277f3a20c9f9aa311874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074233877722583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53701846-7f03-45d2-a443-9c922e94813a name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.336862539Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ad300c0-0037-4897-ae82-16be47df8478 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.337010516Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ad300c0-0037-4897-ae82-16be47df8478 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.338452651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33da47d7-5d6d-491e-bc01-21355bd847e0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.339192867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714074329339155065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33da47d7-5d6d-491e-bc01-21355bd847e0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.339837850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c0c10ae-a7eb-4c0b-b94a-0b78c7e7b00d name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.339938227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c0c10ae-a7eb-4c0b-b94a-0b78c7e7b00d name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.340388215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85f0f3551bac69e811415cfa0d495ebdc5b9f49409cae7becb932794f20a7f7e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074304165981795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52367164dab8a899fc7b1608d061d78e63e5b1be7e41d4dcf9ccb6bde2f27bf7,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074304153818475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e573cff37a1d889f0c091081889067a61cfab99fdd3b1dfd3934ef6b2e481aed,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074304130602073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f44775b3f5b755a6fc92c9d80b0ac7b2c13447e86f82b5d5b63dd1758cb6d06,PodSandboxId:f7b78dc49b4d5dd301d07d39f7c94b61ab2f5d8f12e463339263c00e580e3ace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714074295103647031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c63ed1a37e2bc42658a1e1eb2034458102ecbf8ec38cef2a8ae7a87507c37c,PodSandboxId:2b61a0ef18feb683cc3df6bf868bc06a7d34d83c838a9aef91eaaf5f4b325f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293533381925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb67105e46396566ae583a988c431d21e071f75697d23bd1ac8a3bfcb72ae03,PodSandboxId:786cf10e286c04f7911951463e9a98e2f467c9e79dab5768b91a619835e738fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293370659675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abe9ce14d8d9f3297db7653b706eb7076620562e28b6bb05be8be780daf4ca7,PodSandboxId:2a9bef34205e4a7e271253a069280737e27759acf70b88c2e56257f1b81572d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074290874165433,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074290800404366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074290766988399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074290669380161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762
664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0,PodSandboxId:2efdc1ea633beae5069e0de2197c59ca4bb48d90af87160c4ad87145cb1095c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714074253934831077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24,PodSandboxId:d9422001b9252d2fffb537fc620a587e6b06cc91e7d252a27046b3bb00716f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Sta
te:CONTAINER_EXITED,CreatedAt:1714074253929967712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4,PodSandboxId:e6ca87249707fda91783473e1c66fbcb661ae3296f85286084a2f760f577c224,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714074253111526962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab,PodSandboxId:5ec0be047ed337ea2ed0a1ace797074029d7603d0e83277f3a20c9f9aa311874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074233877722583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c0c10ae-a7eb-4c0b-b94a-0b78c7e7b00d name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.404837791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4d3d726-595b-4f63-a5e7-a86eebeaa362 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.404972339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4d3d726-595b-4f63-a5e7-a86eebeaa362 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.407980470Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ea2df33-2e37-435c-b0e4-d413b071c99e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.408619926Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714074329408583707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ea2df33-2e37-435c-b0e4-d413b071c99e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.409480137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd8e64a2-65da-4a6d-8197-88a26308c39b name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.409591052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd8e64a2-65da-4a6d-8197-88a26308c39b name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.410009916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85f0f3551bac69e811415cfa0d495ebdc5b9f49409cae7becb932794f20a7f7e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074304165981795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52367164dab8a899fc7b1608d061d78e63e5b1be7e41d4dcf9ccb6bde2f27bf7,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074304153818475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e573cff37a1d889f0c091081889067a61cfab99fdd3b1dfd3934ef6b2e481aed,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074304130602073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f44775b3f5b755a6fc92c9d80b0ac7b2c13447e86f82b5d5b63dd1758cb6d06,PodSandboxId:f7b78dc49b4d5dd301d07d39f7c94b61ab2f5d8f12e463339263c00e580e3ace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714074295103647031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c63ed1a37e2bc42658a1e1eb2034458102ecbf8ec38cef2a8ae7a87507c37c,PodSandboxId:2b61a0ef18feb683cc3df6bf868bc06a7d34d83c838a9aef91eaaf5f4b325f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293533381925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb67105e46396566ae583a988c431d21e071f75697d23bd1ac8a3bfcb72ae03,PodSandboxId:786cf10e286c04f7911951463e9a98e2f467c9e79dab5768b91a619835e738fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293370659675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abe9ce14d8d9f3297db7653b706eb7076620562e28b6bb05be8be780daf4ca7,PodSandboxId:2a9bef34205e4a7e271253a069280737e27759acf70b88c2e56257f1b81572d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074290874165433,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074290800404366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074290766988399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074290669380161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762
664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0,PodSandboxId:2efdc1ea633beae5069e0de2197c59ca4bb48d90af87160c4ad87145cb1095c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714074253934831077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24,PodSandboxId:d9422001b9252d2fffb537fc620a587e6b06cc91e7d252a27046b3bb00716f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Sta
te:CONTAINER_EXITED,CreatedAt:1714074253929967712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4,PodSandboxId:e6ca87249707fda91783473e1c66fbcb661ae3296f85286084a2f760f577c224,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714074253111526962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab,PodSandboxId:5ec0be047ed337ea2ed0a1ace797074029d7603d0e83277f3a20c9f9aa311874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074233877722583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd8e64a2-65da-4a6d-8197-88a26308c39b name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.462766515Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe2421e7-7180-4ec8-af02-ad9553e99563 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.462868173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe2421e7-7180-4ec8-af02-ad9553e99563 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.464313295Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=658d3641-30cc-44cf-8466-79cb185a8285 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.464911553Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714074329464883809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=658d3641-30cc-44cf-8466-79cb185a8285 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.465491261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4639c91-071b-4b2e-9a72-af754c8559ab name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.465605815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4639c91-071b-4b2e-9a72-af754c8559ab name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:29 pause-762664 crio[2496]: time="2024-04-25 19:45:29.465957298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85f0f3551bac69e811415cfa0d495ebdc5b9f49409cae7becb932794f20a7f7e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074304165981795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52367164dab8a899fc7b1608d061d78e63e5b1be7e41d4dcf9ccb6bde2f27bf7,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074304153818475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e573cff37a1d889f0c091081889067a61cfab99fdd3b1dfd3934ef6b2e481aed,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074304130602073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f44775b3f5b755a6fc92c9d80b0ac7b2c13447e86f82b5d5b63dd1758cb6d06,PodSandboxId:f7b78dc49b4d5dd301d07d39f7c94b61ab2f5d8f12e463339263c00e580e3ace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714074295103647031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c63ed1a37e2bc42658a1e1eb2034458102ecbf8ec38cef2a8ae7a87507c37c,PodSandboxId:2b61a0ef18feb683cc3df6bf868bc06a7d34d83c838a9aef91eaaf5f4b325f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293533381925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb67105e46396566ae583a988c431d21e071f75697d23bd1ac8a3bfcb72ae03,PodSandboxId:786cf10e286c04f7911951463e9a98e2f467c9e79dab5768b91a619835e738fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293370659675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abe9ce14d8d9f3297db7653b706eb7076620562e28b6bb05be8be780daf4ca7,PodSandboxId:2a9bef34205e4a7e271253a069280737e27759acf70b88c2e56257f1b81572d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074290874165433,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074290800404366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074290766988399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074290669380161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762
664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0,PodSandboxId:2efdc1ea633beae5069e0de2197c59ca4bb48d90af87160c4ad87145cb1095c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714074253934831077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24,PodSandboxId:d9422001b9252d2fffb537fc620a587e6b06cc91e7d252a27046b3bb00716f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Sta
te:CONTAINER_EXITED,CreatedAt:1714074253929967712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4,PodSandboxId:e6ca87249707fda91783473e1c66fbcb661ae3296f85286084a2f760f577c224,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714074253111526962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab,PodSandboxId:5ec0be047ed337ea2ed0a1ace797074029d7603d0e83277f3a20c9f9aa311874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074233877722583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4639c91-071b-4b2e-9a72-af754c8559ab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	85f0f3551bac6       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   25 seconds ago       Running             kube-controller-manager   2                   08129017aad0c       kube-controller-manager-pause-762664
	52367164dab8a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   25 seconds ago       Running             kube-apiserver            2                   6fb24dbb33709       kube-apiserver-pause-762664
	e573cff37a1d8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago       Running             etcd                      2                   1c6d0a639b680       etcd-pause-762664
	8f44775b3f5b7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   34 seconds ago       Running             kube-proxy                1                   f7b78dc49b4d5       kube-proxy-j2lhr
	c4c63ed1a37e2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago       Running             coredns                   1                   2b61a0ef18feb       coredns-7db6d8ff4d-x667t
	fbb67105e4639       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago       Running             coredns                   1                   786cf10e286c0       coredns-7db6d8ff4d-g4zcp
	1abe9ce14d8d9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   38 seconds ago       Running             kube-scheduler            1                   2a9bef34205e4       kube-scheduler-pause-762664
	15ab66a925226       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   38 seconds ago       Exited              etcd                      1                   1c6d0a639b680       etcd-pause-762664
	ea8fe2b8ac695       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   38 seconds ago       Exited              kube-controller-manager   1                   08129017aad0c       kube-controller-manager-pause-762664
	537c5ceb06ae4       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   38 seconds ago       Exited              kube-apiserver            1                   6fb24dbb33709       kube-apiserver-pause-762664
	ed4edf4113dee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   2efdc1ea633be       coredns-7db6d8ff4d-g4zcp
	bcd6bfb37758f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   d9422001b9252       coredns-7db6d8ff4d-x667t
	e6076e0ade5f4       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   About a minute ago   Exited              kube-proxy                0                   e6ca87249707f       kube-proxy-j2lhr
	531f413370c15       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   About a minute ago   Exited              kube-scheduler            0                   5ec0be047ed33       kube-scheduler-pause-762664
	
	
	==> coredns [bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48810 - 1009 "HINFO IN 2689421302928699323.7076540480638446432. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025733554s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c4c63ed1a37e2bc42658a1e1eb2034458102ecbf8ec38cef2a8ae7a87507c37c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44868 - 44166 "HINFO IN 8201741368956416230.8992973879895194046. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01948517s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35336->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35336->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35352->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35352->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35340->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35340->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42639 - 42509 "HINFO IN 7364694214519880523.2155443123500162415. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026832263s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fbb67105e46396566ae583a988c431d21e071f75697d23bd1ac8a3bfcb72ae03] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49786 - 21332 "HINFO IN 6916482763460460802.2067125449859031103. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056799969s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42460->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42460->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42448->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42448->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42464->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42464->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-762664
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-762664
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=pause-762664
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T19_43_59_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:43:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-762664
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:45:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:45:07 +0000   Thu, 25 Apr 2024 19:43:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:45:07 +0000   Thu, 25 Apr 2024 19:43:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:45:07 +0000   Thu, 25 Apr 2024 19:43:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:45:07 +0000   Thu, 25 Apr 2024 19:43:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.146
	  Hostname:    pause-762664
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 880ea3a0a3354ddcb3726e14da9330f0
	  System UUID:                880ea3a0-a335-4ddc-b372-6e14da9330f0
	  Boot ID:                    15a82cff-b5eb-4c35-9e06-91b786620d34
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-g4zcp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 coredns-7db6d8ff4d-x667t                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 etcd-pause-762664                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         92s
	  kube-system                 kube-apiserver-pause-762664             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-pause-762664    200m (10%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-j2lhr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-pause-762664             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node pause-762664 status is now: NodeHasSufficientMemory
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeReady                90s                kubelet          Node pause-762664 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node pause-762664 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node pause-762664 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s                kubelet          Node pause-762664 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           78s                node-controller  Node pause-762664 event: Registered Node pause-762664 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-762664 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-762664 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-762664 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-762664 event: Registered Node pause-762664 in Controller
	
	
	==> dmesg <==
	[  +0.067929] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.226727] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.158215] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.344638] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +5.142204] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +0.066531] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.166259] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.063175] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.020634] systemd-fstab-generator[1276]: Ignoring "noauto" option for root device
	[  +0.086689] kauditd_printk_skb: 69 callbacks suppressed
	[Apr25 19:44] systemd-fstab-generator[1494]: Ignoring "noauto" option for root device
	[  +0.167812] kauditd_printk_skb: 21 callbacks suppressed
	[ +29.990014] systemd-fstab-generator[2347]: Ignoring "noauto" option for root device
	[  +0.111217] kauditd_printk_skb: 90 callbacks suppressed
	[  +0.079396] systemd-fstab-generator[2359]: Ignoring "noauto" option for root device
	[  +0.217475] systemd-fstab-generator[2374]: Ignoring "noauto" option for root device
	[  +0.167019] systemd-fstab-generator[2386]: Ignoring "noauto" option for root device
	[  +0.454921] systemd-fstab-generator[2433]: Ignoring "noauto" option for root device
	[  +6.450715] systemd-fstab-generator[2580]: Ignoring "noauto" option for root device
	[  +0.074539] kauditd_printk_skb: 112 callbacks suppressed
	[  +5.109286] kauditd_printk_skb: 88 callbacks suppressed
	[Apr25 19:45] systemd-fstab-generator[3424]: Ignoring "noauto" option for root device
	[  +0.095983] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.437882] kauditd_printk_skb: 31 callbacks suppressed
	[  +3.390537] systemd-fstab-generator[3716]: Ignoring "noauto" option for root device
	
	
	==> etcd [15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c] <==
	
	
	==> etcd [e573cff37a1d889f0c091081889067a61cfab99fdd3b1dfd3934ef6b2e481aed] <==
	{"level":"info","ts":"2024-04-25T19:45:04.498198Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a63b81a8045c22a0","local-member-id":"52a637c8f882c7df","added-peer-id":"52a637c8f882c7df","added-peer-peer-urls":["https://192.168.61.146:2380"]}
	{"level":"info","ts":"2024-04-25T19:45:04.498354Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a63b81a8045c22a0","local-member-id":"52a637c8f882c7df","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:45:04.498421Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:45:04.509757Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-25T19:45:04.510203Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.146:2380"}
	{"level":"info","ts":"2024-04-25T19:45:04.512299Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.146:2380"}
	{"level":"info","ts":"2024-04-25T19:45:04.516328Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-25T19:45:04.516258Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"52a637c8f882c7df","initial-advertise-peer-urls":["https://192.168.61.146:2380"],"listen-peer-urls":["https://192.168.61.146:2380"],"advertise-client-urls":["https://192.168.61.146:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.146:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-25T19:45:05.559331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-25T19:45:05.559466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-25T19:45:05.559555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df received MsgPreVoteResp from 52a637c8f882c7df at term 2"}
	{"level":"info","ts":"2024-04-25T19:45:05.559605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df became candidate at term 3"}
	{"level":"info","ts":"2024-04-25T19:45:05.559631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df received MsgVoteResp from 52a637c8f882c7df at term 3"}
	{"level":"info","ts":"2024-04-25T19:45:05.559657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df became leader at term 3"}
	{"level":"info","ts":"2024-04-25T19:45:05.559682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 52a637c8f882c7df elected leader 52a637c8f882c7df at term 3"}
	{"level":"info","ts":"2024-04-25T19:45:05.564563Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"52a637c8f882c7df","local-member-attributes":"{Name:pause-762664 ClientURLs:[https://192.168.61.146:2379]}","request-path":"/0/members/52a637c8f882c7df/attributes","cluster-id":"a63b81a8045c22a0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T19:45:05.56486Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:45:05.565114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:45:05.567126Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T19:45:05.567263Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T19:45:05.568767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-25T19:45:05.5715Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.146:2379"}
	{"level":"warn","ts":"2024-04-25T19:45:28.595313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.561822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.146\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-25T19:45:28.595644Z","caller":"traceutil/trace.go:171","msg":"trace[1171848953] range","detail":"{range_begin:/registry/masterleases/192.168.61.146; range_end:; response_count:1; response_revision:448; }","duration":"173.948965ms","start":"2024-04-25T19:45:28.421675Z","end":"2024-04-25T19:45:28.595624Z","steps":["trace[1171848953] 'range keys from in-memory index tree'  (duration: 173.43892ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T19:45:28.839719Z","caller":"traceutil/trace.go:171","msg":"trace[82655130] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"168.731854ms","start":"2024-04-25T19:45:28.670968Z","end":"2024-04-25T19:45:28.839699Z","steps":["trace[82655130] 'process raft request'  (duration: 167.862018ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:45:30 up 2 min,  0 users,  load average: 1.03, 0.37, 0.13
	Linux pause-762664 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [52367164dab8a899fc7b1608d061d78e63e5b1be7e41d4dcf9ccb6bde2f27bf7] <==
	I0425 19:45:07.334831       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0425 19:45:07.335356       1 aggregator.go:165] initial CRD sync complete...
	I0425 19:45:07.335439       1 autoregister_controller.go:141] Starting autoregister controller
	I0425 19:45:07.335466       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0425 19:45:07.396534       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0425 19:45:07.396668       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0425 19:45:07.396811       1 shared_informer.go:320] Caches are synced for configmaps
	I0425 19:45:07.397358       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0425 19:45:07.397925       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0425 19:45:07.398426       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0425 19:45:07.409818       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0425 19:45:07.409875       1 policy_source.go:224] refreshing policies
	I0425 19:45:07.409934       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0425 19:45:07.411809       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0425 19:45:07.412452       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0425 19:45:07.430094       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0425 19:45:07.436871       1 cache.go:39] Caches are synced for autoregister controller
	I0425 19:45:08.181867       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0425 19:45:08.990599       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0425 19:45:09.016577       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0425 19:45:09.068637       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0425 19:45:09.097806       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0425 19:45:09.108426       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0425 19:45:20.830655       1 controller.go:615] quota admission added evaluator for: endpoints
	I0425 19:45:20.882894       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb] <==
	I0425 19:44:51.120279       1 options.go:221] external host was not specified, using 192.168.61.146
	I0425 19:44:51.123495       1 server.go:148] Version: v1.30.0
	I0425 19:44:51.123535       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0425 19:44:51.944196       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:51.944325       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0425 19:44:51.944549       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0425 19:44:51.948803       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0425 19:44:51.950304       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0425 19:44:51.950495       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0425 19:44:51.950700       1 instance.go:299] Using reconciler: lease
	W0425 19:44:51.952146       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:52.945216       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:52.945297       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:52.952608       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:54.353672       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:54.448960       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:54.663499       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:56.470975       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:56.728910       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:56.749471       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:45:00.131691       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:45:00.743031       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:45:01.584494       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [85f0f3551bac69e811415cfa0d495ebdc5b9f49409cae7becb932794f20a7f7e] <==
	I0425 19:45:20.581937       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0425 19:45:20.587712       1 shared_informer.go:320] Caches are synced for persistent volume
	I0425 19:45:20.590269       1 shared_informer.go:320] Caches are synced for endpoint
	I0425 19:45:20.593317       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0425 19:45:20.628378       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0425 19:45:20.628517       1 shared_informer.go:320] Caches are synced for GC
	I0425 19:45:20.631227       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0425 19:45:20.637341       1 shared_informer.go:320] Caches are synced for node
	I0425 19:45:20.637753       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0425 19:45:20.639408       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0425 19:45:20.639979       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0425 19:45:20.640122       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0425 19:45:20.684179       1 shared_informer.go:320] Caches are synced for taint
	I0425 19:45:20.684323       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0425 19:45:20.684413       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-762664"
	I0425 19:45:20.684459       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0425 19:45:20.727803       1 shared_informer.go:320] Caches are synced for attach detach
	I0425 19:45:20.739517       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0425 19:45:20.808423       1 shared_informer.go:320] Caches are synced for resource quota
	I0425 19:45:20.820525       1 shared_informer.go:320] Caches are synced for job
	I0425 19:45:20.827578       1 shared_informer.go:320] Caches are synced for cronjob
	I0425 19:45:20.834440       1 shared_informer.go:320] Caches are synced for resource quota
	I0425 19:45:21.218929       1 shared_informer.go:320] Caches are synced for garbage collector
	I0425 19:45:21.243611       1 shared_informer.go:320] Caches are synced for garbage collector
	I0425 19:45:21.243796       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e] <==
	
	
	==> kube-proxy [8f44775b3f5b755a6fc92c9d80b0ac7b2c13447e86f82b5d5b63dd1758cb6d06] <==
	I0425 19:44:55.284871       1 server_linux.go:69] "Using iptables proxy"
	E0425 19:45:02.640353       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-762664\": dial tcp 192.168.61.146:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.146:34644->192.168.61.146:8443: read: connection reset by peer"
	E0425 19:45:03.772011       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-762664\": dial tcp 192.168.61.146:8443: connect: connection refused"
	I0425 19:45:07.362391       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.146"]
	I0425 19:45:07.447405       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:45:07.447434       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:45:07.447449       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:45:07.454893       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:45:07.455329       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:45:07.455493       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:45:07.456793       1 config.go:192] "Starting service config controller"
	I0425 19:45:07.456881       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:45:07.456921       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:45:07.456938       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:45:07.457452       1 config.go:319] "Starting node config controller"
	I0425 19:45:07.459127       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:45:07.557348       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:45:07.557367       1 shared_informer.go:320] Caches are synced for service config
	I0425 19:45:07.559752       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4] <==
	I0425 19:44:13.665485       1 server_linux.go:69] "Using iptables proxy"
	I0425 19:44:13.890737       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.146"]
	I0425 19:44:14.084802       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:44:14.084831       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:44:14.084849       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:44:14.088405       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:44:14.088631       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:44:14.088879       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:44:14.090031       1 config.go:192] "Starting service config controller"
	I0425 19:44:14.090209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:44:14.090258       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:44:14.090276       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:44:14.090840       1 config.go:319] "Starting node config controller"
	I0425 19:44:14.090875       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:44:14.191277       1 shared_informer.go:320] Caches are synced for service config
	I0425 19:44:14.191356       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:44:14.191885       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1abe9ce14d8d9f3297db7653b706eb7076620562e28b6bb05be8be780daf4ca7] <==
	W0425 19:45:07.298261       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 19:45:07.298315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 19:45:07.298367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 19:45:07.298407       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 19:45:07.298452       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0425 19:45:07.298465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0425 19:45:07.298523       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0425 19:45:07.298575       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0425 19:45:07.298629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 19:45:07.298678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0425 19:45:07.298731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 19:45:07.298781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 19:45:07.299031       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 19:45:07.308236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 19:45:07.308372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 19:45:07.308416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 19:45:07.308473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 19:45:07.308522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 19:45:07.308586       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:45:07.308628       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0425 19:45:07.308706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0425 19:45:07.308747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0425 19:45:07.308818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 19:45:07.308867       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0425 19:45:07.451163       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab] <==
	E0425 19:43:56.468013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0425 19:43:56.468095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 19:43:56.468130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 19:43:56.468321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 19:43:56.468367       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 19:43:57.287389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:43:57.287466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0425 19:43:57.322398       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 19:43:57.322480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 19:43:57.355653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 19:43:57.355880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 19:43:57.484400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 19:43:57.484480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0425 19:43:57.579795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 19:43:57.579891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 19:43:57.607501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0425 19:43:57.607671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0425 19:43:57.641301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 19:43:57.641512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 19:43:57.644788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0425 19:43:57.644926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0425 19:43:57.878525       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 19:43:57.878603       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0425 19:44:01.066986       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0425 19:44:35.942637       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.845600    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36ae50117f119bc1f2822a38375444e0-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-762664\" (UID: \"36ae50117f119bc1f2822a38375444e0\") " pod="kube-system/kube-controller-manager-pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.845614    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d576325ea51f34aa54f82e656b7d0c4b-ca-certs\") pod \"kube-apiserver-pause-762664\" (UID: \"d576325ea51f34aa54f82e656b7d0c4b\") " pod="kube-system/kube-apiserver-pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.845658    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d576325ea51f34aa54f82e656b7d0c4b-usr-share-ca-certificates\") pod \"kube-apiserver-pause-762664\" (UID: \"d576325ea51f34aa54f82e656b7d0c4b\") " pod="kube-system/kube-apiserver-pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.845672    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36ae50117f119bc1f2822a38375444e0-k8s-certs\") pod \"kube-controller-manager-pause-762664\" (UID: \"36ae50117f119bc1f2822a38375444e0\") " pod="kube-system/kube-controller-manager-pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.845691    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36ae50117f119bc1f2822a38375444e0-kubeconfig\") pod \"kube-controller-manager-pause-762664\" (UID: \"36ae50117f119bc1f2822a38375444e0\") " pod="kube-system/kube-controller-manager-pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.947372    3431 kubelet_node_status.go:73] "Attempting to register node" node="pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: E0425 19:45:03.949600    3431 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.146:8443: connect: connection refused" node="pause-762664"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: I0425 19:45:04.114693    3431 scope.go:117] "RemoveContainer" containerID="15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: I0425 19:45:04.116722    3431 scope.go:117] "RemoveContainer" containerID="537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: I0425 19:45:04.117749    3431 scope.go:117] "RemoveContainer" containerID="ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: E0425 19:45:04.242243    3431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-762664?timeout=10s\": dial tcp 192.168.61.146:8443: connect: connection refused" interval="800ms"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: I0425 19:45:04.351874    3431 kubelet_node_status.go:73] "Attempting to register node" node="pause-762664"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: E0425 19:45:04.353272    3431 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.146:8443: connect: connection refused" node="pause-762664"
	Apr 25 19:45:05 pause-762664 kubelet[3431]: I0425 19:45:05.155639    3431 kubelet_node_status.go:73] "Attempting to register node" node="pause-762664"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.509517    3431 kubelet_node_status.go:112] "Node was previously registered" node="pause-762664"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.509663    3431 kubelet_node_status.go:76] "Successfully registered node" node="pause-762664"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.511649    3431 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.512694    3431 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.617701    3431 apiserver.go:52] "Watching apiserver"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.621224    3431 topology_manager.go:215] "Topology Admit Handler" podUID="d9d92885-9821-488c-bb93-a4a35d60fb1a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g4zcp"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.621382    3431 topology_manager.go:215] "Topology Admit Handler" podUID="e764791e-c170-49f4-b844-668b59f31072" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x667t"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.621489    3431 topology_manager.go:215] "Topology Admit Handler" podUID="3bb81443-7890-4887-9031-5a05eba9d67d" podNamespace="kube-system" podName="kube-proxy-j2lhr"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.632390    3431 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.698774    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bb81443-7890-4887-9031-5a05eba9d67d-lib-modules\") pod \"kube-proxy-j2lhr\" (UID: \"3bb81443-7890-4887-9031-5a05eba9d67d\") " pod="kube-system/kube-proxy-j2lhr"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.698924    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bb81443-7890-4887-9031-5a05eba9d67d-xtables-lock\") pod \"kube-proxy-j2lhr\" (UID: \"3bb81443-7890-4887-9031-5a05eba9d67d\") " pod="kube-system/kube-proxy-j2lhr"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-762664 -n pause-762664
helpers_test.go:261: (dbg) Run:  kubectl --context pause-762664 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-762664 -n pause-762664
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-762664 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-762664 logs -n 25: (1.994923991s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo cat                            | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo cat                            | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo cat                            | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo cat                            | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo                                | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo find                           | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-120641 sudo crio                           | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-120641                                     | cilium-120641             | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC | 25 Apr 24 19:42 UTC |
	| start   | -p NoKubernetes-335371                               | NoKubernetes-335371       | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC |                     |
	|         | --no-kubernetes                                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-335371                               | NoKubernetes-335371       | jenkins | v1.33.0 | 25 Apr 24 19:42 UTC | 25 Apr 24 19:44 UTC |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-783271                          | force-systemd-env-783271  | jenkins | v1.33.0 | 25 Apr 24 19:43 UTC | 25 Apr 24 19:43 UTC |
	| start   | -p cert-expiration-571974                            | cert-expiration-571974    | jenkins | v1.33.0 | 25 Apr 24 19:43 UTC | 25 Apr 24 19:44 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p pause-762664                                      | pause-762664              | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC | 25 Apr 24 19:45 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-335371                               | NoKubernetes-335371       | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC | 25 Apr 24 19:44 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p offline-crio-744375                               | offline-crio-744375       | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC | 25 Apr 24 19:44 UTC |
	| start   | -p force-systemd-flag-543895                         | force-systemd-flag-543895 | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-335371                               | NoKubernetes-335371       | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC | 25 Apr 24 19:44 UTC |
	| start   | -p NoKubernetes-335371                               | NoKubernetes-335371       | jenkins | v1.33.0 | 25 Apr 24 19:44 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:44:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:44:58.188153   53486 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:44:58.188254   53486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:44:58.188258   53486 out.go:304] Setting ErrFile to fd 2...
	I0425 19:44:58.188261   53486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:44:58.188457   53486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:44:58.189032   53486 out.go:298] Setting JSON to false
	I0425 19:44:58.189962   53486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5244,"bootTime":1714069054,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:44:58.190027   53486 start.go:139] virtualization: kvm guest
	I0425 19:44:58.193360   53486 out.go:177] * [NoKubernetes-335371] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:44:58.195000   53486 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:44:58.194962   53486 notify.go:220] Checking for updates...
	I0425 19:44:58.196389   53486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:44:58.197733   53486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:44:58.198964   53486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:44:58.200214   53486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:44:58.201617   53486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:44:58.203289   53486 config.go:182] Loaded profile config "cert-expiration-571974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:44:58.203369   53486 config.go:182] Loaded profile config "force-systemd-flag-543895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:44:58.203490   53486 config.go:182] Loaded profile config "pause-762664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:44:58.203504   53486 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0425 19:44:58.203569   53486 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:44:58.239363   53486 out.go:177] * Using the kvm2 driver based on user configuration
	I0425 19:44:58.240615   53486 start.go:297] selected driver: kvm2
	I0425 19:44:58.240621   53486 start.go:901] validating driver "kvm2" against <nil>
	I0425 19:44:58.240630   53486 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:44:58.240906   53486 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0425 19:44:58.240970   53486 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:44:58.241030   53486 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:44:58.256764   53486 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:44:58.256827   53486 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 19:44:58.257485   53486 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0425 19:44:58.257660   53486 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0425 19:44:58.257729   53486 cni.go:84] Creating CNI manager for ""
	I0425 19:44:58.257740   53486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:44:58.257748   53486 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 19:44:58.257775   53486 start.go:1878] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0425 19:44:58.257827   53486 start.go:340] cluster config:
	{Name:NoKubernetes-335371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-335371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:44:58.257954   53486 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:44:58.259939   53486 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-335371
	I0425 19:44:54.197838   53123 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0425 19:44:54.198045   53123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:44:54.198091   53123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:44:54.219374   53123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0425 19:44:54.219888   53123 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:44:54.220481   53123 main.go:141] libmachine: Using API Version  1
	I0425 19:44:54.220528   53123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:44:54.220853   53123 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:44:54.221012   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetMachineName
	I0425 19:44:54.221154   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:44:54.221262   53123 start.go:159] libmachine.API.Create for "force-systemd-flag-543895" (driver="kvm2")
	I0425 19:44:54.221286   53123 client.go:168] LocalClient.Create starting
	I0425 19:44:54.221317   53123 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 19:44:54.221357   53123 main.go:141] libmachine: Decoding PEM data...
	I0425 19:44:54.221379   53123 main.go:141] libmachine: Parsing certificate...
	I0425 19:44:54.221438   53123 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 19:44:54.221460   53123 main.go:141] libmachine: Decoding PEM data...
	I0425 19:44:54.221476   53123 main.go:141] libmachine: Parsing certificate...
	I0425 19:44:54.221498   53123 main.go:141] libmachine: Running pre-create checks...
	I0425 19:44:54.221519   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .PreCreateCheck
	I0425 19:44:54.221906   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetConfigRaw
	I0425 19:44:54.222286   53123 main.go:141] libmachine: Creating machine...
	I0425 19:44:54.222302   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .Create
	I0425 19:44:54.222435   53123 main.go:141] libmachine: (force-systemd-flag-543895) Creating KVM machine...
	I0425 19:44:54.223582   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found existing default KVM network
	I0425 19:44:54.224734   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:54.224604   53269 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:82:ea} reservation:<nil>}
	I0425 19:44:54.225704   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:54.225631   53269 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00035a0b0}
	I0425 19:44:54.225729   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | created network xml: 
	I0425 19:44:54.225738   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | <network>
	I0425 19:44:54.225750   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   <name>mk-force-systemd-flag-543895</name>
	I0425 19:44:54.225759   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   <dns enable='no'/>
	I0425 19:44:54.225771   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   
	I0425 19:44:54.225790   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0425 19:44:54.225812   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |     <dhcp>
	I0425 19:44:54.225822   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0425 19:44:54.225835   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |     </dhcp>
	I0425 19:44:54.225845   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   </ip>
	I0425 19:44:54.225853   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG |   
	I0425 19:44:54.225861   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | </network>
	I0425 19:44:54.225873   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | 
	I0425 19:44:54.231276   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | trying to create private KVM network mk-force-systemd-flag-543895 192.168.50.0/24...
	I0425 19:44:54.314460   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | private KVM network mk-force-systemd-flag-543895 192.168.50.0/24 created
	I0425 19:44:54.314489   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895 ...
	I0425 19:44:54.314503   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:54.314441   53269 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:44:54.314537   53123 main.go:141] libmachine: (force-systemd-flag-543895) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 19:44:54.314553   53123 main.go:141] libmachine: (force-systemd-flag-543895) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 19:44:54.553491   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:54.553327   53269 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa...
	I0425 19:44:55.009655   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:55.009493   53269 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/force-systemd-flag-543895.rawdisk...
	I0425 19:44:55.009685   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Writing magic tar header
	I0425 19:44:55.009713   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Writing SSH key tar header
	I0425 19:44:55.009727   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:55.009610   53269 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895 ...
	I0425 19:44:55.009743   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895
	I0425 19:44:55.009801   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895 (perms=drwx------)
	I0425 19:44:55.009831   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 19:44:55.009847   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 19:44:55.009866   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:44:55.009884   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 19:44:55.009894   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 19:44:55.009905   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 19:44:55.009914   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home/jenkins
	I0425 19:44:55.009924   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Checking permissions on dir: /home
	I0425 19:44:55.009931   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Skipping /home - not owner
	I0425 19:44:55.009945   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 19:44:55.009963   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 19:44:55.009975   53123 main.go:141] libmachine: (force-systemd-flag-543895) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 19:44:55.009983   53123 main.go:141] libmachine: (force-systemd-flag-543895) Creating domain...
	I0425 19:44:55.011339   53123 main.go:141] libmachine: (force-systemd-flag-543895) define libvirt domain using xml: 
	I0425 19:44:55.011365   53123 main.go:141] libmachine: (force-systemd-flag-543895) <domain type='kvm'>
	I0425 19:44:55.011377   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <name>force-systemd-flag-543895</name>
	I0425 19:44:55.011389   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <memory unit='MiB'>2048</memory>
	I0425 19:44:55.011400   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <vcpu>2</vcpu>
	I0425 19:44:55.011406   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <features>
	I0425 19:44:55.011420   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <acpi/>
	I0425 19:44:55.011426   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <apic/>
	I0425 19:44:55.011434   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <pae/>
	I0425 19:44:55.011447   53123 main.go:141] libmachine: (force-systemd-flag-543895)     
	I0425 19:44:55.011459   53123 main.go:141] libmachine: (force-systemd-flag-543895)   </features>
	I0425 19:44:55.011470   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <cpu mode='host-passthrough'>
	I0425 19:44:55.011477   53123 main.go:141] libmachine: (force-systemd-flag-543895)   
	I0425 19:44:55.011494   53123 main.go:141] libmachine: (force-systemd-flag-543895)   </cpu>
	I0425 19:44:55.011506   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <os>
	I0425 19:44:55.011516   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <type>hvm</type>
	I0425 19:44:55.011541   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <boot dev='cdrom'/>
	I0425 19:44:55.011552   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <boot dev='hd'/>
	I0425 19:44:55.011564   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <bootmenu enable='no'/>
	I0425 19:44:55.011572   53123 main.go:141] libmachine: (force-systemd-flag-543895)   </os>
	I0425 19:44:55.011584   53123 main.go:141] libmachine: (force-systemd-flag-543895)   <devices>
	I0425 19:44:55.011595   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <disk type='file' device='cdrom'>
	I0425 19:44:55.011609   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/boot2docker.iso'/>
	I0425 19:44:55.011621   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <target dev='hdc' bus='scsi'/>
	I0425 19:44:55.011630   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <readonly/>
	I0425 19:44:55.011642   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </disk>
	I0425 19:44:55.011651   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <disk type='file' device='disk'>
	I0425 19:44:55.011660   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 19:44:55.011685   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/force-systemd-flag-543895.rawdisk'/>
	I0425 19:44:55.011699   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <target dev='hda' bus='virtio'/>
	I0425 19:44:55.011710   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </disk>
	I0425 19:44:55.011718   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <interface type='network'>
	I0425 19:44:55.011730   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <source network='mk-force-systemd-flag-543895'/>
	I0425 19:44:55.011741   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <model type='virtio'/>
	I0425 19:44:55.011756   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </interface>
	I0425 19:44:55.011774   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <interface type='network'>
	I0425 19:44:55.011783   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <source network='default'/>
	I0425 19:44:55.011796   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <model type='virtio'/>
	I0425 19:44:55.011804   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </interface>
	I0425 19:44:55.011817   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <serial type='pty'>
	I0425 19:44:55.011826   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <target port='0'/>
	I0425 19:44:55.011836   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </serial>
	I0425 19:44:55.011844   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <console type='pty'>
	I0425 19:44:55.011854   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <target type='serial' port='0'/>
	I0425 19:44:55.011864   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </console>
	I0425 19:44:55.011873   53123 main.go:141] libmachine: (force-systemd-flag-543895)     <rng model='virtio'>
	I0425 19:44:55.011885   53123 main.go:141] libmachine: (force-systemd-flag-543895)       <backend model='random'>/dev/random</backend>
	I0425 19:44:55.011894   53123 main.go:141] libmachine: (force-systemd-flag-543895)     </rng>
	I0425 19:44:55.011903   53123 main.go:141] libmachine: (force-systemd-flag-543895)     
	I0425 19:44:55.011917   53123 main.go:141] libmachine: (force-systemd-flag-543895)     
	I0425 19:44:55.011928   53123 main.go:141] libmachine: (force-systemd-flag-543895)   </devices>
	I0425 19:44:55.011937   53123 main.go:141] libmachine: (force-systemd-flag-543895) </domain>
	I0425 19:44:55.011950   53123 main.go:141] libmachine: (force-systemd-flag-543895) 
	I0425 19:44:55.016713   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:a7:1e:8f in network default
	I0425 19:44:55.017482   53123 main.go:141] libmachine: (force-systemd-flag-543895) Ensuring networks are active...
	I0425 19:44:55.017507   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:55.018334   53123 main.go:141] libmachine: (force-systemd-flag-543895) Ensuring network default is active
	I0425 19:44:55.018792   53123 main.go:141] libmachine: (force-systemd-flag-543895) Ensuring network mk-force-systemd-flag-543895 is active
	I0425 19:44:55.019503   53123 main.go:141] libmachine: (force-systemd-flag-543895) Getting domain xml...
	I0425 19:44:55.020438   53123 main.go:141] libmachine: (force-systemd-flag-543895) Creating domain...
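	(Editor's note: the DBG blocks above show the libvirt network and domain XML that the kvm2 driver defines. A minimal sketch for inspecting the result by hand, assuming the virsh client is installed on the Jenkins host; minikube itself talks to libvirt through docker-machine-driver-kvm2, not virsh:
	  virsh net-list --all                        # should include mk-force-systemd-flag-543895
	  virsh dumpxml force-systemd-flag-543895     # prints the domain XML defined above
	  virsh domifaddr force-systemd-flag-543895   # the DHCP lease the retries below wait for
	)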
	I0425 19:44:56.331770   53123 main.go:141] libmachine: (force-systemd-flag-543895) Waiting to get IP...
	I0425 19:44:56.332562   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:56.333024   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:56.333069   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:56.333012   53269 retry.go:31] will retry after 255.936503ms: waiting for machine to come up
	I0425 19:44:56.590529   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:56.591052   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:56.591081   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:56.590995   53269 retry.go:31] will retry after 336.470709ms: waiting for machine to come up
	I0425 19:44:56.929686   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:56.930251   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:56.930278   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:56.930193   53269 retry.go:31] will retry after 450.038265ms: waiting for machine to come up
	I0425 19:44:57.381527   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:57.404567   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:57.404603   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:57.404484   53269 retry.go:31] will retry after 605.49286ms: waiting for machine to come up
	I0425 19:44:58.011206   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:58.011713   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:58.011742   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:58.011665   53269 retry.go:31] will retry after 497.146273ms: waiting for machine to come up
	I0425 19:44:58.261590   53486 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0425 19:44:58.377531   53486 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0425 19:44:58.377666   53486 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/NoKubernetes-335371/config.json ...
	I0425 19:44:58.377703   53486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/NoKubernetes-335371/config.json: {Name:mk6254d0d533222ac67230aff9d54ab2c7ed994f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:44:58.377862   53486 start.go:360] acquireMachinesLock for NoKubernetes-335371: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
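	(Editor's note: the profile config saved above is plain JSON on disk; a minimal sketch for confirming the no-Kubernetes sentinel, using the exact path from the log and only standard grep:
	  grep KubernetesVersion \
	    /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/NoKubernetes-335371/config.json
	  # per the cluster config logged earlier, this should show v0.0.0, the version
	  # minikube substitutes when --no-kubernetes is requested
	)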
	I0425 19:44:58.510228   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:58.510704   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:58.510736   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:58.510667   53269 retry.go:31] will retry after 642.287101ms: waiting for machine to come up
	I0425 19:44:59.154439   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:44:59.155150   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:44:59.155177   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:44:59.155076   53269 retry.go:31] will retry after 1.15090394s: waiting for machine to come up
	I0425 19:45:00.307857   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:00.308224   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:00.308253   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:00.308182   53269 retry.go:31] will retry after 1.418985934s: waiting for machine to come up
	I0425 19:45:01.728805   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:01.729255   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:01.729284   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:01.729214   53269 retry.go:31] will retry after 1.793205224s: waiting for machine to come up
	I0425 19:45:01.703523   52810 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e 537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0 bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24 e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4 18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b 531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab c4066d8d6033ed321eb01d5a08d8b9a6c32eee002a442d7a0b7fad50a5aae689 28e68767d7b7898448d0882481acf39693721439e1aa0dcfd4f5447af85516ad: (10.521847877s)
	W0425 19:45:01.703597   52810 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e 537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0 bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24 e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4 18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b 531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab c4066d8d6033ed321eb01d5a08d8b9a6c32eee002a442d7a0b7fad50a5aae689 28e68767d7b7898448d0882481acf39693721439e1aa0dcfd4f5447af85516ad: Process exited with status 1
	stdout:
	15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c
	ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e
	537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb
	ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0
	bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24
	e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4
	
	stderr:
	E0425 19:45:01.695742    2907 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b\": container with ID starting with 18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b not found: ID does not exist" containerID="18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b"
	time="2024-04-25T19:45:01Z" level=fatal msg="stopping the container \"18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b\": rpc error: code = NotFound desc = could not find container \"18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b\": container with ID starting with 18fa8e1e01c9706c637c7aabd08e76fab2234308cbcbc6a9341d6cb757ef8f7b not found: ID does not exist"
	I0425 19:45:01.703678   52810 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 19:45:01.753450   52810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 19:45:01.767134   52810 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Apr 25 19:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Apr 25 19:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Apr 25 19:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Apr 25 19:43 /etc/kubernetes/scheduler.conf
	
	I0425 19:45:01.767201   52810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 19:45:01.778515   52810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 19:45:01.789099   52810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 19:45:01.799679   52810 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0425 19:45:01.799733   52810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 19:45:01.810618   52810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 19:45:01.821081   52810 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0425 19:45:01.821141   52810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 19:45:01.832444   52810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 19:45:01.843520   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:01.912718   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:03.234848   52810 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.322094337s)
	I0425 19:45:03.234890   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:03.492106   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:03.582586   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:03.704760   52810 api_server.go:52] waiting for apiserver process to appear ...
	I0425 19:45:03.704821   52810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 19:45:04.205060   52810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 19:45:04.705093   52810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 19:45:04.723793   52810 api_server.go:72] duration metric: took 1.019031032s to wait for apiserver process to appear ...
	I0425 19:45:04.723823   52810 api_server.go:88] waiting for apiserver healthz status ...
	I0425 19:45:04.723845   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:03.524661   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:03.525142   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:03.525170   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:03.525088   53269 retry.go:31] will retry after 1.80199974s: waiting for machine to come up
	I0425 19:45:05.328636   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:05.329127   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:05.329199   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:05.329119   53269 retry.go:31] will retry after 2.421701866s: waiting for machine to come up
	I0425 19:45:07.753032   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:07.753519   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:07.753552   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:07.753459   53269 retry.go:31] will retry after 3.092699852s: waiting for machine to come up
	I0425 19:45:07.292517   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 19:45:07.292547   52810 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 19:45:07.292580   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:07.336902   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 19:45:07.336947   52810 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 19:45:07.724465   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:07.731221   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 19:45:07.731249   52810 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 19:45:08.224602   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:08.229337   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 19:45:08.229364   52810 api_server.go:103] status: https://192.168.61.146:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 19:45:08.723911   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:08.729810   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I0425 19:45:08.738361   52810 api_server.go:141] control plane version: v1.30.0
	I0425 19:45:08.738391   52810 api_server.go:131] duration metric: took 4.014560379s to wait for apiserver health ...
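	(Editor's note: the healthz polling above can be reproduced against the same endpoint with curl; a minimal sketch, with the IP, port, and expected responses taken from this log, and -k used only because the apiserver certificate is signed by the cluster's own minikubeCA:
	  curl -k https://192.168.61.146:8443/healthz             # 403 before RBAC bootstrap, 500 while post-start hooks finish, then 200 "ok"
	  curl -k "https://192.168.61.146:8443/healthz?verbose"   # lists each [+]/[-] check, matching the 500 bodies above
	)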
	I0425 19:45:08.738402   52810 cni.go:84] Creating CNI manager for ""
	I0425 19:45:08.738409   52810 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:45:08.740103   52810 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 19:45:08.741447   52810 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 19:45:08.762024   52810 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 19:45:08.795487   52810 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 19:45:08.808763   52810 system_pods.go:59] 7 kube-system pods found
	I0425 19:45:08.808794   52810 system_pods.go:61] "coredns-7db6d8ff4d-g4zcp" [d9d92885-9821-488c-bb93-a4a35d60fb1a] Running
	I0425 19:45:08.808809   52810 system_pods.go:61] "coredns-7db6d8ff4d-x667t" [e764791e-c170-49f4-b844-668b59f31072] Running
	I0425 19:45:08.808844   52810 system_pods.go:61] "etcd-pause-762664" [7f83a16c-07d2-4c41-b029-9e022a962f8b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 19:45:08.808858   52810 system_pods.go:61] "kube-apiserver-pause-762664" [8b442b86-8626-4b72-8583-36c3e2617faa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 19:45:08.808878   52810 system_pods.go:61] "kube-controller-manager-pause-762664" [0d731a16-9799-4916-8ce7-10b8b38657a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 19:45:08.808889   52810 system_pods.go:61] "kube-proxy-j2lhr" [3bb81443-7890-4887-9031-5a05eba9d67d] Running
	I0425 19:45:08.808908   52810 system_pods.go:61] "kube-scheduler-pause-762664" [98bb7678-6066-4fc0-ab0c-c90b36ac5339] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 19:45:08.808920   52810 system_pods.go:74] duration metric: took 13.412055ms to wait for pod list to return data ...
	I0425 19:45:08.808933   52810 node_conditions.go:102] verifying NodePressure condition ...
	I0425 19:45:08.814941   52810 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 19:45:08.814970   52810 node_conditions.go:123] node cpu capacity is 2
	I0425 19:45:08.814983   52810 node_conditions.go:105] duration metric: took 6.041316ms to run NodePressure ...
	I0425 19:45:08.815010   52810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 19:45:09.125891   52810 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 19:45:09.131443   52810 kubeadm.go:733] kubelet initialised
	I0425 19:45:09.131467   52810 kubeadm.go:734] duration metric: took 5.545845ms waiting for restarted kubelet to initialise ...
	I0425 19:45:09.131485   52810 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 19:45:09.140572   52810 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:09.152710   52810 pod_ready.go:92] pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:09.152732   52810 pod_ready.go:81] duration metric: took 12.135152ms for pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:09.152740   52810 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:09.158698   52810 pod_ready.go:92] pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:09.158724   52810 pod_ready.go:81] duration metric: took 5.976825ms for pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:09.158736   52810 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-762664" in "kube-system" namespace to be "Ready" ...
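Each pod_ready wait above polls the pod's Ready condition until it reports "True". A hand-run equivalent (a sketch only, assuming the pause-762664 kube context) would be:

    kubectl --context pause-762664 -n kube-system get pod etcd-pause-762664 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'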
	I0425 19:45:10.848012   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:10.848495   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:10.848521   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:10.848412   53269 retry.go:31] will retry after 3.812029793s: waiting for machine to come up
	I0425 19:45:11.166530   52810 pod_ready.go:102] pod "etcd-pause-762664" in "kube-system" namespace has status "Ready":"False"
	I0425 19:45:13.166654   52810 pod_ready.go:102] pod "etcd-pause-762664" in "kube-system" namespace has status "Ready":"False"
	I0425 19:45:15.667160   52810 pod_ready.go:102] pod "etcd-pause-762664" in "kube-system" namespace has status "Ready":"False"
	I0425 19:45:14.662581   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:14.662992   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find current IP address of domain force-systemd-flag-543895 in network mk-force-systemd-flag-543895
	I0425 19:45:14.663023   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | I0425 19:45:14.662957   53269 retry.go:31] will retry after 4.124167035s: waiting for machine to come up
	I0425 19:45:20.427389   53486 start.go:364] duration metric: took 22.049509061s to acquireMachinesLock for "NoKubernetes-335371"
	I0425 19:45:20.427426   53486 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-335371 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-335371 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 19:45:20.427558   53486 start.go:125] createHost starting for "" (driver="kvm2")
	I0425 19:45:17.665162   52810 pod_ready.go:92] pod "etcd-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:17.665195   52810 pod_ready.go:81] duration metric: took 8.5064499s for pod "etcd-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:17.665211   52810 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:19.671910   52810 pod_ready.go:92] pod "kube-apiserver-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:19.671938   52810 pod_ready.go:81] duration metric: took 2.006716953s for pod "kube-apiserver-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:19.671950   52810 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:18.791284   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:18.791687   53123 main.go:141] libmachine: (force-systemd-flag-543895) Found IP for machine: 192.168.50.9
	I0425 19:45:18.791709   53123 main.go:141] libmachine: (force-systemd-flag-543895) Reserving static IP address...
	I0425 19:45:18.791723   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has current primary IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:18.792111   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | unable to find host DHCP lease matching {name: "force-systemd-flag-543895", mac: "52:54:00:b7:de:a4", ip: "192.168.50.9"} in network mk-force-systemd-flag-543895
	I0425 19:45:18.867401   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Getting to WaitForSSH function...
	I0425 19:45:18.867435   53123 main.go:141] libmachine: (force-systemd-flag-543895) Reserved static IP address: 192.168.50.9
	I0425 19:45:18.867455   53123 main.go:141] libmachine: (force-systemd-flag-543895) Waiting for SSH to be available...
	I0425 19:45:18.870009   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:18.870503   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:18.870534   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:18.870616   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Using SSH client type: external
	I0425 19:45:18.870647   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa (-rw-------)
	I0425 19:45:18.870675   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 19:45:18.870694   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | About to run SSH command:
	I0425 19:45:18.870711   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | exit 0
	I0425 19:45:19.002512   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | SSH cmd err, output: <nil>: 
	I0425 19:45:19.002783   53123 main.go:141] libmachine: (force-systemd-flag-543895) KVM machine creation complete!
	I0425 19:45:19.003056   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetConfigRaw
	I0425 19:45:19.003524   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:19.003741   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:19.003880   53123 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 19:45:19.003891   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetState
	I0425 19:45:19.005046   53123 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 19:45:19.005062   53123 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 19:45:19.005069   53123 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 19:45:19.005078   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.007782   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.008194   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.008225   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.008375   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.008535   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.008702   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.008840   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.008979   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:19.009191   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:19.009203   53123 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 19:45:19.121831   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 19:45:19.121855   53123 main.go:141] libmachine: Detecting the provisioner...
	I0425 19:45:19.121875   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.124641   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.125047   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.125076   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.125336   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.125571   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.125726   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.125848   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.126011   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:19.126178   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:19.126189   53123 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 19:45:19.239579   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 19:45:19.239642   53123 main.go:141] libmachine: found compatible host: buildroot
	I0425 19:45:19.239650   53123 main.go:141] libmachine: Provisioning with buildroot...
	I0425 19:45:19.239658   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetMachineName
	I0425 19:45:19.239879   53123 buildroot.go:166] provisioning hostname "force-systemd-flag-543895"
	I0425 19:45:19.239903   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetMachineName
	I0425 19:45:19.240108   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.242714   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.243080   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.243141   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.243269   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.243478   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.243629   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.243773   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.243951   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:19.244179   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:19.244196   53123 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-543895 && echo "force-systemd-flag-543895" | sudo tee /etc/hostname
	I0425 19:45:19.375446   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-543895
	
	I0425 19:45:19.375480   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.377960   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.378341   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.378375   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.378554   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.378740   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.378918   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.379061   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.379238   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:19.379438   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:19.379469   53123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-543895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-543895/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-543895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 19:45:19.503988   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
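The script above makes sure the 127.0.1.1 entry in /etc/hosts matches the new hostname. A quick sanity check inside the guest (illustrative, not part of the captured run) is:

    grep -n 'force-systemd-flag-543895' /etc/hostname /etc/hosts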
	I0425 19:45:19.504021   53123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 19:45:19.504079   53123 buildroot.go:174] setting up certificates
	I0425 19:45:19.504095   53123 provision.go:84] configureAuth start
	I0425 19:45:19.504120   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetMachineName
	I0425 19:45:19.504401   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetIP
	I0425 19:45:19.507198   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.507555   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.507576   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.507735   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.509992   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.510344   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.510382   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.510560   53123 provision.go:143] copyHostCerts
	I0425 19:45:19.510601   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:45:19.510633   53123 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 19:45:19.510642   53123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:45:19.510702   53123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 19:45:19.510790   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:45:19.510807   53123 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 19:45:19.510813   53123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:45:19.510838   53123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 19:45:19.510893   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:45:19.510909   53123 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 19:45:19.510915   53123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:45:19.510936   53123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 19:45:19.510992   53123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-543895 san=[127.0.0.1 192.168.50.9 force-systemd-flag-543895 localhost minikube]
	I0425 19:45:19.693616   53123 provision.go:177] copyRemoteCerts
	I0425 19:45:19.693665   53123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 19:45:19.693693   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.696338   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.696707   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.696740   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.696926   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.697109   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.697286   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.697424   53123 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa Username:docker}
	I0425 19:45:19.787717   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0425 19:45:19.787789   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 19:45:19.815043   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0425 19:45:19.815114   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0425 19:45:19.843973   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0425 19:45:19.844045   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 19:45:19.871108   53123 provision.go:87] duration metric: took 366.993271ms to configureAuth
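configureAuth generates a server certificate whose SANs cover the machine IP and names listed in the san=[...] line above. Assuming openssl is present in the guest image, the SANs can be double-checked with:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'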
	I0425 19:45:19.871134   53123 buildroot.go:189] setting minikube options for container-runtime
	I0425 19:45:19.871345   53123 config.go:182] Loaded profile config "force-systemd-flag-543895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:45:19.871422   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:19.874074   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.874485   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:19.874515   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:19.874711   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:19.874940   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.875140   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:19.875320   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:19.875492   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:19.875706   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:19.875724   53123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 19:45:20.160160   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
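That command writes an environment file consumed by the crio unit and restarts the service. A sketch for confirming the result in the guest, if the unit references the file, is:

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environmentfile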
	I0425 19:45:20.160197   53123 main.go:141] libmachine: Checking connection to Docker...
	I0425 19:45:20.160209   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetURL
	I0425 19:45:20.161574   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | Using libvirt version 6000000
	I0425 19:45:20.163858   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.164182   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.164219   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.164357   53123 main.go:141] libmachine: Docker is up and running!
	I0425 19:45:20.164371   53123 main.go:141] libmachine: Reticulating splines...
	I0425 19:45:20.164378   53123 client.go:171] duration metric: took 25.943084638s to LocalClient.Create
	I0425 19:45:20.164398   53123 start.go:167] duration metric: took 25.943137001s to libmachine.API.Create "force-systemd-flag-543895"
	I0425 19:45:20.164411   53123 start.go:293] postStartSetup for "force-systemd-flag-543895" (driver="kvm2")
	I0425 19:45:20.164419   53123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 19:45:20.164435   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:20.164672   53123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 19:45:20.164693   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:20.166592   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.166936   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.166968   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.167110   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:20.167312   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:20.167483   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:20.167662   53123 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa Username:docker}
	I0425 19:45:20.256673   53123 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 19:45:20.262372   53123 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 19:45:20.262397   53123 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 19:45:20.262458   53123 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 19:45:20.262536   53123 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 19:45:20.262545   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /etc/ssl/certs/136822.pem
	I0425 19:45:20.262661   53123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 19:45:20.273631   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:45:20.305134   53123 start.go:296] duration metric: took 140.709387ms for postStartSetup
	I0425 19:45:20.305193   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetConfigRaw
	I0425 19:45:20.305840   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetIP
	I0425 19:45:20.308241   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.308636   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.308669   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.308892   53123 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/config.json ...
	I0425 19:45:20.309064   53123 start.go:128] duration metric: took 26.113246135s to createHost
	I0425 19:45:20.309086   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:20.311364   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.311745   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.311773   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.311908   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:20.312058   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:20.312194   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:20.312358   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:20.312511   53123 main.go:141] libmachine: Using SSH client type: native
	I0425 19:45:20.312720   53123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.9 22 <nil> <nil>}
	I0425 19:45:20.312736   53123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 19:45:20.427213   53123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714074320.376678989
	
	I0425 19:45:20.427247   53123 fix.go:216] guest clock: 1714074320.376678989
	I0425 19:45:20.427262   53123 fix.go:229] Guest: 2024-04-25 19:45:20.376678989 +0000 UTC Remote: 2024-04-25 19:45:20.309074769 +0000 UTC m=+46.952429901 (delta=67.60422ms)
	I0425 19:45:20.427307   53123 fix.go:200] guest clock delta is within tolerance: 67.60422ms
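For reference, the reported delta is simply guest minus remote wall-clock: 1714074320.376678989 - 1714074320.309074769 ≈ 0.0676 s, i.e. the 67.60422ms shown, comfortably inside the clock-skew tolerance.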
	I0425 19:45:20.427314   53123 start.go:83] releasing machines lock for "force-systemd-flag-543895", held for 26.231666855s
	I0425 19:45:20.427348   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:20.427601   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetIP
	I0425 19:45:20.430778   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.431219   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.431259   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.431409   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:20.431990   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:20.432198   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .DriverName
	I0425 19:45:20.432271   53123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 19:45:20.432331   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:20.432487   53123 ssh_runner.go:195] Run: cat /version.json
	I0425 19:45:20.432511   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHHostname
	I0425 19:45:20.435553   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.435850   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.435937   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.435964   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.436074   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:20.436231   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:20.436249   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:20.436271   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:20.436421   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:20.436469   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHPort
	I0425 19:45:20.436638   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHKeyPath
	I0425 19:45:20.436652   53123 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa Username:docker}
	I0425 19:45:20.436771   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetSSHUsername
	I0425 19:45:20.436902   53123 sshutil.go:53] new ssh client: &{IP:192.168.50.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/force-systemd-flag-543895/id_rsa Username:docker}
	I0425 19:45:20.554724   53123 ssh_runner.go:195] Run: systemctl --version
	I0425 19:45:20.562146   53123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 19:45:20.733216   53123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 19:45:20.740269   53123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 19:45:20.740344   53123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 19:45:20.760248   53123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 19:45:20.760273   53123 start.go:494] detecting cgroup driver to use...
	I0425 19:45:20.760287   53123 start.go:498] using "systemd" cgroup driver as enforced via flags
	I0425 19:45:20.760345   53123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 19:45:20.780769   53123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 19:45:20.797615   53123 docker.go:217] disabling cri-docker service (if available) ...
	I0425 19:45:20.797674   53123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 19:45:20.812625   53123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 19:45:20.827812   53123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 19:45:20.952544   53123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 19:45:21.117205   53123 docker.go:233] disabling docker service ...
	I0425 19:45:21.117281   53123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 19:45:21.134892   53123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 19:45:21.148591   53123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 19:45:21.300879   53123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 19:45:21.441087   53123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 19:45:21.457693   53123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 19:45:21.478801   53123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 19:45:21.478852   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.489647   53123 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0425 19:45:21.489697   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.500624   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.511293   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.523532   53123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 19:45:21.536288   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.549316   53123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:45:21.576487   53123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
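Those sed edits leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A quick way to eyeball the drop-in afterwards (a sketch, run in the guest) is:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf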
	I0425 19:45:21.592040   53123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 19:45:21.603959   53123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 19:45:21.604031   53123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 19:45:21.620494   53123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 19:45:21.633054   53123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:45:21.767677   53123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 19:45:21.925024   53123 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 19:45:21.925085   53123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 19:45:21.931184   53123 start.go:562] Will wait 60s for crictl version
	I0425 19:45:21.931238   53123 ssh_runner.go:195] Run: which crictl
	I0425 19:45:21.936183   53123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 19:45:21.979349   53123 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
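crictl here talks to the socket configured in /etc/crictl.yaml a few lines earlier. The same version information can be pulled manually, bypassing that file, with:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version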
	I0425 19:45:21.979431   53123 ssh_runner.go:195] Run: crio --version
	I0425 19:45:22.014991   53123 ssh_runner.go:195] Run: crio --version
	I0425 19:45:22.051079   53123 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 19:45:20.430001   53486 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0425 19:45:20.430244   53486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:45:20.430284   53486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:45:20.449106   53486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45377
	I0425 19:45:20.449481   53486 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:45:20.450071   53486 main.go:141] libmachine: Using API Version  1
	I0425 19:45:20.450088   53486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:45:20.450481   53486 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:45:20.450667   53486 main.go:141] libmachine: (NoKubernetes-335371) Calling .GetMachineName
	I0425 19:45:20.450806   53486 main.go:141] libmachine: (NoKubernetes-335371) Calling .DriverName
	I0425 19:45:20.450946   53486 start.go:159] libmachine.API.Create for "NoKubernetes-335371" (driver="kvm2")
	I0425 19:45:20.450963   53486 client.go:168] LocalClient.Create starting
	I0425 19:45:20.450983   53486 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 19:45:20.451005   53486 main.go:141] libmachine: Decoding PEM data...
	I0425 19:45:20.451015   53486 main.go:141] libmachine: Parsing certificate...
	I0425 19:45:20.451053   53486 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 19:45:20.451066   53486 main.go:141] libmachine: Decoding PEM data...
	I0425 19:45:20.451073   53486 main.go:141] libmachine: Parsing certificate...
	I0425 19:45:20.451097   53486 main.go:141] libmachine: Running pre-create checks...
	I0425 19:45:20.451102   53486 main.go:141] libmachine: (NoKubernetes-335371) Calling .PreCreateCheck
	I0425 19:45:20.451546   53486 main.go:141] libmachine: (NoKubernetes-335371) Calling .GetConfigRaw
	I0425 19:45:20.452049   53486 main.go:141] libmachine: Creating machine...
	I0425 19:45:20.452058   53486 main.go:141] libmachine: (NoKubernetes-335371) Calling .Create
	I0425 19:45:20.452177   53486 main.go:141] libmachine: (NoKubernetes-335371) Creating KVM machine...
	I0425 19:45:20.453503   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | found existing default KVM network
	I0425 19:45:20.455198   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:20.455044   53612 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0f0}
	I0425 19:45:20.455239   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | created network xml: 
	I0425 19:45:20.455253   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | <network>
	I0425 19:45:20.455263   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   <name>mk-NoKubernetes-335371</name>
	I0425 19:45:20.455269   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   <dns enable='no'/>
	I0425 19:45:20.455277   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   
	I0425 19:45:20.455285   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0425 19:45:20.455292   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |     <dhcp>
	I0425 19:45:20.455299   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0425 19:45:20.455313   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |     </dhcp>
	I0425 19:45:20.455319   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   </ip>
	I0425 19:45:20.455326   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG |   
	I0425 19:45:20.455336   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | </network>
	I0425 19:45:20.455349   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | 
	I0425 19:45:20.461595   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | trying to create private KVM network mk-NoKubernetes-335371 192.168.39.0/24...
	I0425 19:45:20.535365   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | private KVM network mk-NoKubernetes-335371 192.168.39.0/24 created
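The network XML above is defined through libvirt. Assuming virsh access on the build host, the freshly created network can be listed and dumped back with:

    virsh net-list --all
    virsh net-dumpxml mk-NoKubernetes-335371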
	I0425 19:45:20.535404   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:20.535349   53612 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:45:20.535439   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371 ...
	I0425 19:45:20.535467   53486 main.go:141] libmachine: (NoKubernetes-335371) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 19:45:20.535485   53486 main.go:141] libmachine: (NoKubernetes-335371) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 19:45:20.791446   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:20.791313   53612 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371/id_rsa...
	I0425 19:45:20.911545   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:20.911389   53612 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371/NoKubernetes-335371.rawdisk...
	I0425 19:45:20.911565   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Writing magic tar header
	I0425 19:45:20.911581   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Writing SSH key tar header
	I0425 19:45:20.911592   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:20.911563   53612 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371 ...
	I0425 19:45:20.911739   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371
	I0425 19:45:20.911779   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 19:45:20.911793   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371 (perms=drwx------)
	I0425 19:45:20.911812   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 19:45:20.911823   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 19:45:20.911835   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 19:45:20.911845   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 19:45:20.911855   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:45:20.911865   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 19:45:20.911873   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 19:45:20.911883   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home/jenkins
	I0425 19:45:20.911896   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Checking permissions on dir: /home
	I0425 19:45:20.911907   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | Skipping /home - not owner
	I0425 19:45:20.911917   53486 main.go:141] libmachine: (NoKubernetes-335371) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 19:45:20.911931   53486 main.go:141] libmachine: (NoKubernetes-335371) Creating domain...
	I0425 19:45:20.913403   53486 main.go:141] libmachine: (NoKubernetes-335371) define libvirt domain using xml: 
	I0425 19:45:20.913414   53486 main.go:141] libmachine: (NoKubernetes-335371) <domain type='kvm'>
	I0425 19:45:20.913422   53486 main.go:141] libmachine: (NoKubernetes-335371)   <name>NoKubernetes-335371</name>
	I0425 19:45:20.913427   53486 main.go:141] libmachine: (NoKubernetes-335371)   <memory unit='MiB'>6000</memory>
	I0425 19:45:20.913433   53486 main.go:141] libmachine: (NoKubernetes-335371)   <vcpu>2</vcpu>
	I0425 19:45:20.913437   53486 main.go:141] libmachine: (NoKubernetes-335371)   <features>
	I0425 19:45:20.913443   53486 main.go:141] libmachine: (NoKubernetes-335371)     <acpi/>
	I0425 19:45:20.913447   53486 main.go:141] libmachine: (NoKubernetes-335371)     <apic/>
	I0425 19:45:20.913453   53486 main.go:141] libmachine: (NoKubernetes-335371)     <pae/>
	I0425 19:45:20.913459   53486 main.go:141] libmachine: (NoKubernetes-335371)     
	I0425 19:45:20.913465   53486 main.go:141] libmachine: (NoKubernetes-335371)   </features>
	I0425 19:45:20.913470   53486 main.go:141] libmachine: (NoKubernetes-335371)   <cpu mode='host-passthrough'>
	I0425 19:45:20.913475   53486 main.go:141] libmachine: (NoKubernetes-335371)   
	I0425 19:45:20.913480   53486 main.go:141] libmachine: (NoKubernetes-335371)   </cpu>
	I0425 19:45:20.913486   53486 main.go:141] libmachine: (NoKubernetes-335371)   <os>
	I0425 19:45:20.913490   53486 main.go:141] libmachine: (NoKubernetes-335371)     <type>hvm</type>
	I0425 19:45:20.913496   53486 main.go:141] libmachine: (NoKubernetes-335371)     <boot dev='cdrom'/>
	I0425 19:45:20.913500   53486 main.go:141] libmachine: (NoKubernetes-335371)     <boot dev='hd'/>
	I0425 19:45:20.913507   53486 main.go:141] libmachine: (NoKubernetes-335371)     <bootmenu enable='no'/>
	I0425 19:45:20.913511   53486 main.go:141] libmachine: (NoKubernetes-335371)   </os>
	I0425 19:45:20.913517   53486 main.go:141] libmachine: (NoKubernetes-335371)   <devices>
	I0425 19:45:20.913522   53486 main.go:141] libmachine: (NoKubernetes-335371)     <disk type='file' device='cdrom'>
	I0425 19:45:20.913532   53486 main.go:141] libmachine: (NoKubernetes-335371)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371/boot2docker.iso'/>
	I0425 19:45:20.913537   53486 main.go:141] libmachine: (NoKubernetes-335371)       <target dev='hdc' bus='scsi'/>
	I0425 19:45:20.913543   53486 main.go:141] libmachine: (NoKubernetes-335371)       <readonly/>
	I0425 19:45:20.913548   53486 main.go:141] libmachine: (NoKubernetes-335371)     </disk>
	I0425 19:45:20.913556   53486 main.go:141] libmachine: (NoKubernetes-335371)     <disk type='file' device='disk'>
	I0425 19:45:20.913564   53486 main.go:141] libmachine: (NoKubernetes-335371)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 19:45:20.913574   53486 main.go:141] libmachine: (NoKubernetes-335371)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/NoKubernetes-335371/NoKubernetes-335371.rawdisk'/>
	I0425 19:45:20.913600   53486 main.go:141] libmachine: (NoKubernetes-335371)       <target dev='hda' bus='virtio'/>
	I0425 19:45:20.913608   53486 main.go:141] libmachine: (NoKubernetes-335371)     </disk>
	I0425 19:45:20.913613   53486 main.go:141] libmachine: (NoKubernetes-335371)     <interface type='network'>
	I0425 19:45:20.913620   53486 main.go:141] libmachine: (NoKubernetes-335371)       <source network='mk-NoKubernetes-335371'/>
	I0425 19:45:20.913628   53486 main.go:141] libmachine: (NoKubernetes-335371)       <model type='virtio'/>
	I0425 19:45:20.913634   53486 main.go:141] libmachine: (NoKubernetes-335371)     </interface>
	I0425 19:45:20.913640   53486 main.go:141] libmachine: (NoKubernetes-335371)     <interface type='network'>
	I0425 19:45:20.913647   53486 main.go:141] libmachine: (NoKubernetes-335371)       <source network='default'/>
	I0425 19:45:20.913652   53486 main.go:141] libmachine: (NoKubernetes-335371)       <model type='virtio'/>
	I0425 19:45:20.913659   53486 main.go:141] libmachine: (NoKubernetes-335371)     </interface>
	I0425 19:45:20.913665   53486 main.go:141] libmachine: (NoKubernetes-335371)     <serial type='pty'>
	I0425 19:45:20.913672   53486 main.go:141] libmachine: (NoKubernetes-335371)       <target port='0'/>
	I0425 19:45:20.913677   53486 main.go:141] libmachine: (NoKubernetes-335371)     </serial>
	I0425 19:45:20.913684   53486 main.go:141] libmachine: (NoKubernetes-335371)     <console type='pty'>
	I0425 19:45:20.913689   53486 main.go:141] libmachine: (NoKubernetes-335371)       <target type='serial' port='0'/>
	I0425 19:45:20.913695   53486 main.go:141] libmachine: (NoKubernetes-335371)     </console>
	I0425 19:45:20.913701   53486 main.go:141] libmachine: (NoKubernetes-335371)     <rng model='virtio'>
	I0425 19:45:20.913709   53486 main.go:141] libmachine: (NoKubernetes-335371)       <backend model='random'>/dev/random</backend>
	I0425 19:45:20.913714   53486 main.go:141] libmachine: (NoKubernetes-335371)     </rng>
	I0425 19:45:20.913721   53486 main.go:141] libmachine: (NoKubernetes-335371)     
	I0425 19:45:20.913725   53486 main.go:141] libmachine: (NoKubernetes-335371)     
	I0425 19:45:20.913731   53486 main.go:141] libmachine: (NoKubernetes-335371)   </devices>
	I0425 19:45:20.913735   53486 main.go:141] libmachine: (NoKubernetes-335371) </domain>
	I0425 19:45:20.913746   53486 main.go:141] libmachine: (NoKubernetes-335371) 
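
The block above shows the kvm2 driver rendering a `<domain type='kvm'>` XML definition and then defining and starting it through libvirt ("Creating domain..."). Below is a minimal, stand-alone sketch of that define-then-create flow using the libvirt.org/go/libvirt bindings; the binding choice, the `domain.xml` file name, and the error handling are assumptions for illustration, not minikube's own driver code.

```go
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumed binding; minikube's kvm2 driver wraps libvirt its own way
)

func main() {
	// Connect to the system libvirt daemon, the same URI the log reports (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml would hold a definition like the <domain type='kvm'> block in the log above.
	xmlCfg, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then start it ("Creating domain..." in the log).
	dom, err := conn.DomainDefineXML(string(xmlCfg))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}
```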
	I0425 19:45:20.919033   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:0f:a2:b9 in network default
	I0425 19:45:20.919815   53486 main.go:141] libmachine: (NoKubernetes-335371) Ensuring networks are active...
	I0425 19:45:20.919830   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:20.920523   53486 main.go:141] libmachine: (NoKubernetes-335371) Ensuring network default is active
	I0425 19:45:20.920790   53486 main.go:141] libmachine: (NoKubernetes-335371) Ensuring network mk-NoKubernetes-335371 is active
	I0425 19:45:20.921306   53486 main.go:141] libmachine: (NoKubernetes-335371) Getting domain xml...
	I0425 19:45:20.922038   53486 main.go:141] libmachine: (NoKubernetes-335371) Creating domain...
	I0425 19:45:22.219916   53486 main.go:141] libmachine: (NoKubernetes-335371) Waiting to get IP...
	I0425 19:45:22.220690   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:22.221193   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:22.221232   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:22.221159   53612 retry.go:31] will retry after 245.322087ms: waiting for machine to come up
	I0425 19:45:22.468728   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:22.469457   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:22.469479   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:22.469417   53612 retry.go:31] will retry after 246.156953ms: waiting for machine to come up
	I0425 19:45:22.716943   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:22.717457   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:22.717475   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:22.717420   53612 retry.go:31] will retry after 421.840693ms: waiting for machine to come up
	I0425 19:45:23.141094   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:23.141650   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:23.141673   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:23.141551   53612 retry.go:31] will retry after 466.266362ms: waiting for machine to come up
	I0425 19:45:22.052461   53123 main.go:141] libmachine: (force-systemd-flag-543895) Calling .GetIP
	I0425 19:45:22.055668   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:22.056073   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:de:a4", ip: ""} in network mk-force-systemd-flag-543895: {Iface:virbr2 ExpiryTime:2024-04-25 20:45:10 +0000 UTC Type:0 Mac:52:54:00:b7:de:a4 Iaid: IPaddr:192.168.50.9 Prefix:24 Hostname:force-systemd-flag-543895 Clientid:01:52:54:00:b7:de:a4}
	I0425 19:45:22.056105   53123 main.go:141] libmachine: (force-systemd-flag-543895) DBG | domain force-systemd-flag-543895 has defined IP address 192.168.50.9 and MAC address 52:54:00:b7:de:a4 in network mk-force-systemd-flag-543895
	I0425 19:45:22.056349   53123 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0425 19:45:22.062952   53123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 19:45:22.081420   53123 kubeadm.go:877] updating cluster {Name:force-systemd-flag-543895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:force-systemd-flag-543895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.9 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiza
tions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 19:45:22.081511   53123 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 19:45:22.081560   53123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:45:22.124826   53123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 19:45:22.124897   53123 ssh_runner.go:195] Run: which lz4
	I0425 19:45:22.129811   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0425 19:45:22.129906   53123 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 19:45:22.134856   53123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 19:45:22.134886   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
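
The sequence above first asks CRI-O for its image list and only copies the ~394 MB preload tarball when the expected control-plane image is missing. A rough Go sketch of that "is the preload already there?" check follows; the JSON field names mirror crictl's usual `images --output json` shape and the image reference is the one from the log, so treat both as assumptions rather than minikube's exact parsing code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// criImages mirrors the subset of `crictl images --output json` output used here (assumed shape).
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any local image tag matches the wanted reference.
func hasImage(wanted string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, wanted) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.0")
	if err != nil {
		fmt.Println("crictl query failed:", err)
		return
	}
	if ok {
		fmt.Println("images already preloaded, skipping tarball copy")
	} else {
		fmt.Println("preload missing, copy and extract preloaded.tar.lz4")
	}
}
```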
	I0425 19:45:21.679970   52810 pod_ready.go:102] pod "kube-controller-manager-pause-762664" in "kube-system" namespace has status "Ready":"False"
	I0425 19:45:23.680917   52810 pod_ready.go:102] pod "kube-controller-manager-pause-762664" in "kube-system" namespace has status "Ready":"False"
	I0425 19:45:24.181533   52810 pod_ready.go:92] pod "kube-controller-manager-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.181564   52810 pod_ready.go:81] duration metric: took 4.509605206s for pod "kube-controller-manager-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.181580   52810 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j2lhr" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.188218   52810 pod_ready.go:92] pod "kube-proxy-j2lhr" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.188252   52810 pod_ready.go:81] duration metric: took 6.655909ms for pod "kube-proxy-j2lhr" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.188267   52810 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.194046   52810 pod_ready.go:92] pod "kube-scheduler-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.194070   52810 pod_ready.go:81] duration metric: took 5.795243ms for pod "kube-scheduler-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.194080   52810 pod_ready.go:38] duration metric: took 15.062583775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
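
The pod_ready.go lines above poll each system-critical pod until its Ready condition turns True and record how long the wait took. Below is a condensed sketch of the same wait implemented with client-go; the namespace, pod name, poll interval, and timeout are placeholders taken from the log, and the kubeconfig lookup is simplified.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	start := time.Now()
	if err := waitPodReady(cs, "kube-system", "kube-controller-manager-pause-762664", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod Ready after", time.Since(start))
}
```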
	I0425 19:45:24.194100   52810 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 19:45:24.211876   52810 ops.go:34] apiserver oom_adj: -16
	I0425 19:45:24.211898   52810 kubeadm.go:591] duration metric: took 33.157499571s to restartPrimaryControlPlane
	I0425 19:45:24.211909   52810 kubeadm.go:393] duration metric: took 33.478359534s to StartCluster
	I0425 19:45:24.211929   52810 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:24.212017   52810 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:45:24.213355   52810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:24.213648   52810 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.146 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 19:45:24.215507   52810 out.go:177] * Verifying Kubernetes components...
	I0425 19:45:24.213731   52810 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 19:45:24.213901   52810 config.go:182] Loaded profile config "pause-762664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:45:24.216978   52810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:45:24.218624   52810 out.go:177] * Enabled addons: 
	I0425 19:45:24.219998   52810 addons.go:505] duration metric: took 6.278948ms for enable addons: enabled=[]
	I0425 19:45:24.450530   52810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 19:45:24.472557   52810 node_ready.go:35] waiting up to 6m0s for node "pause-762664" to be "Ready" ...
	I0425 19:45:24.476419   52810 node_ready.go:49] node "pause-762664" has status "Ready":"True"
	I0425 19:45:24.476442   52810 node_ready.go:38] duration metric: took 3.852908ms for node "pause-762664" to be "Ready" ...
	I0425 19:45:24.476459   52810 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 19:45:24.483828   52810 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.490861   52810 pod_ready.go:92] pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.490888   52810 pod_ready.go:81] duration metric: took 7.029776ms for pod "coredns-7db6d8ff4d-g4zcp" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.490899   52810 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.576371   52810 pod_ready.go:92] pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.576402   52810 pod_ready.go:81] duration metric: took 85.494399ms for pod "coredns-7db6d8ff4d-x667t" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.576417   52810 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.977391   52810 pod_ready.go:92] pod "etcd-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:24.977417   52810 pod_ready.go:81] duration metric: took 400.992911ms for pod "etcd-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:24.977429   52810 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:25.378287   52810 pod_ready.go:92] pod "kube-apiserver-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:25.378316   52810 pod_ready.go:81] duration metric: took 400.878843ms for pod "kube-apiserver-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:25.378330   52810 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:25.775693   52810 pod_ready.go:92] pod "kube-controller-manager-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:25.775725   52810 pod_ready.go:81] duration metric: took 397.385928ms for pod "kube-controller-manager-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:25.775740   52810 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j2lhr" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:26.177830   52810 pod_ready.go:92] pod "kube-proxy-j2lhr" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:26.177862   52810 pod_ready.go:81] duration metric: took 402.114436ms for pod "kube-proxy-j2lhr" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:26.177876   52810 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:26.577727   52810 pod_ready.go:92] pod "kube-scheduler-pause-762664" in "kube-system" namespace has status "Ready":"True"
	I0425 19:45:26.577756   52810 pod_ready.go:81] duration metric: took 399.871415ms for pod "kube-scheduler-pause-762664" in "kube-system" namespace to be "Ready" ...
	I0425 19:45:26.577767   52810 pod_ready.go:38] duration metric: took 2.101296019s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 19:45:26.577795   52810 api_server.go:52] waiting for apiserver process to appear ...
	I0425 19:45:26.577853   52810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 19:45:26.602819   52810 api_server.go:72] duration metric: took 2.389128704s to wait for apiserver process to appear ...
	I0425 19:45:26.602849   52810 api_server.go:88] waiting for apiserver healthz status ...
	I0425 19:45:26.602871   52810 api_server.go:253] Checking apiserver healthz at https://192.168.61.146:8443/healthz ...
	I0425 19:45:26.616642   52810 api_server.go:279] https://192.168.61.146:8443/healthz returned 200:
	ok
	I0425 19:45:26.618473   52810 api_server.go:141] control plane version: v1.30.0
	I0425 19:45:26.618501   52810 api_server.go:131] duration metric: took 15.644112ms to wait for apiserver health ...
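
Once the control plane is back, the run waits for the apiserver process and then for `https://<node-ip>:8443/healthz` to answer 200/"ok" before reading the control-plane version. A bare-bones sketch of that health probe is below; the real tooling authenticates with the cluster's client certificates, whereas this sketch skips TLS verification purely to stay short, which is an explicit simplification.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or attempts run out.
func waitHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip verification instead of loading the cluster CA, to keep the sketch minimal.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.61.146:8443/healthz", 30); err != nil {
		panic(err)
	}
}
```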
	I0425 19:45:26.618511   52810 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 19:45:26.780634   52810 system_pods.go:59] 7 kube-system pods found
	I0425 19:45:26.780666   52810 system_pods.go:61] "coredns-7db6d8ff4d-g4zcp" [d9d92885-9821-488c-bb93-a4a35d60fb1a] Running
	I0425 19:45:26.780673   52810 system_pods.go:61] "coredns-7db6d8ff4d-x667t" [e764791e-c170-49f4-b844-668b59f31072] Running
	I0425 19:45:26.780678   52810 system_pods.go:61] "etcd-pause-762664" [7f83a16c-07d2-4c41-b029-9e022a962f8b] Running
	I0425 19:45:26.780682   52810 system_pods.go:61] "kube-apiserver-pause-762664" [8b442b86-8626-4b72-8583-36c3e2617faa] Running
	I0425 19:45:26.780686   52810 system_pods.go:61] "kube-controller-manager-pause-762664" [0d731a16-9799-4916-8ce7-10b8b38657a3] Running
	I0425 19:45:26.780699   52810 system_pods.go:61] "kube-proxy-j2lhr" [3bb81443-7890-4887-9031-5a05eba9d67d] Running
	I0425 19:45:26.780704   52810 system_pods.go:61] "kube-scheduler-pause-762664" [98bb7678-6066-4fc0-ab0c-c90b36ac5339] Running
	I0425 19:45:26.780712   52810 system_pods.go:74] duration metric: took 162.193444ms to wait for pod list to return data ...
	I0425 19:45:26.780721   52810 default_sa.go:34] waiting for default service account to be created ...
	I0425 19:45:26.976828   52810 default_sa.go:45] found service account: "default"
	I0425 19:45:26.976859   52810 default_sa.go:55] duration metric: took 196.130948ms for default service account to be created ...
	I0425 19:45:26.976871   52810 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 19:45:27.180325   52810 system_pods.go:86] 7 kube-system pods found
	I0425 19:45:27.180357   52810 system_pods.go:89] "coredns-7db6d8ff4d-g4zcp" [d9d92885-9821-488c-bb93-a4a35d60fb1a] Running
	I0425 19:45:27.180363   52810 system_pods.go:89] "coredns-7db6d8ff4d-x667t" [e764791e-c170-49f4-b844-668b59f31072] Running
	I0425 19:45:27.180367   52810 system_pods.go:89] "etcd-pause-762664" [7f83a16c-07d2-4c41-b029-9e022a962f8b] Running
	I0425 19:45:27.180372   52810 system_pods.go:89] "kube-apiserver-pause-762664" [8b442b86-8626-4b72-8583-36c3e2617faa] Running
	I0425 19:45:27.180376   52810 system_pods.go:89] "kube-controller-manager-pause-762664" [0d731a16-9799-4916-8ce7-10b8b38657a3] Running
	I0425 19:45:27.180382   52810 system_pods.go:89] "kube-proxy-j2lhr" [3bb81443-7890-4887-9031-5a05eba9d67d] Running
	I0425 19:45:27.180387   52810 system_pods.go:89] "kube-scheduler-pause-762664" [98bb7678-6066-4fc0-ab0c-c90b36ac5339] Running
	I0425 19:45:27.180395   52810 system_pods.go:126] duration metric: took 203.518429ms to wait for k8s-apps to be running ...
	I0425 19:45:27.180408   52810 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 19:45:27.180457   52810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 19:45:27.203127   52810 system_svc.go:56] duration metric: took 22.709129ms WaitForService to wait for kubelet
	I0425 19:45:27.203163   52810 kubeadm.go:576] duration metric: took 2.989476253s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:45:27.203185   52810 node_conditions.go:102] verifying NodePressure condition ...
	I0425 19:45:27.377626   52810 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 19:45:27.377657   52810 node_conditions.go:123] node cpu capacity is 2
	I0425 19:45:27.377666   52810 node_conditions.go:105] duration metric: took 174.476542ms to run NodePressure ...
	I0425 19:45:27.377677   52810 start.go:240] waiting for startup goroutines ...
	I0425 19:45:27.377683   52810 start.go:245] waiting for cluster config update ...
	I0425 19:45:27.377690   52810 start.go:254] writing updated cluster config ...
	I0425 19:45:27.393126   52810 ssh_runner.go:195] Run: rm -f paused
	I0425 19:45:27.445169   52810 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 19:45:27.596871   52810 out.go:177] * Done! kubectl is now configured to use "pause-762664" cluster and "default" namespace by default
	I0425 19:45:23.609316   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:23.609879   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:23.609905   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:23.609830   53612 retry.go:31] will retry after 694.530439ms: waiting for machine to come up
	I0425 19:45:24.305621   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:24.306085   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:24.306097   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:24.306039   53612 retry.go:31] will retry after 869.825254ms: waiting for machine to come up
	I0425 19:45:25.177950   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:25.178481   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:25.178502   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:25.178427   53612 retry.go:31] will retry after 737.309374ms: waiting for machine to come up
	I0425 19:45:25.917858   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:25.918595   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:25.918611   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:25.918493   53612 retry.go:31] will retry after 1.465177218s: waiting for machine to come up
	I0425 19:45:27.385064   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | domain NoKubernetes-335371 has defined MAC address 52:54:00:b7:85:0a in network mk-NoKubernetes-335371
	I0425 19:45:27.385546   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | unable to find current IP address of domain NoKubernetes-335371 in network mk-NoKubernetes-335371
	I0425 19:45:27.385562   53486 main.go:141] libmachine: (NoKubernetes-335371) DBG | I0425 19:45:27.385493   53612 retry.go:31] will retry after 1.813034414s: waiting for machine to come up
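
The retry.go lines above re-check the domain's DHCP lease with growing, slightly jittered delays ("will retry after 245ms / 466ms / 1.46s ...") until an IP appears. The following stand-alone sketch reproduces that pattern; `lookupIP` is a placeholder for whatever actually reads the lease table (for example, parsing `virsh net-dhcp-leases`), and the delay constants are illustrative.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoLease stands in for "unable to find current IP address of domain ... in network ...".
var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder; a real driver would inspect the libvirt network's DHCP leases.
func lookupIP(mac string) (string, error) {
	return "", errNoLease
}

// waitForIP retries lookupIP with jittered, roughly doubling delays, like the retry.go lines above.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		sleep := delay + jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 10*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("machine %s never obtained an IP", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:b7:85:0a", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```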
	I0425 19:45:23.912327   53123 crio.go:462] duration metric: took 1.782430898s to copy over tarball
	I0425 19:45:23.912427   53123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 19:45:26.684051   53123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.771593025s)
	I0425 19:45:26.684089   53123 crio.go:469] duration metric: took 2.771727474s to extract the tarball
	I0425 19:45:26.684102   53123 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 19:45:26.725694   53123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:45:26.784338   53123 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 19:45:26.784359   53123 cache_images.go:84] Images are preloaded, skipping loading
	I0425 19:45:26.784368   53123 kubeadm.go:928] updating node { 192.168.50.9 8443 v1.30.0 crio true true} ...
	I0425 19:45:26.784490   53123 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-543895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-543895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 19:45:26.784569   53123 ssh_runner.go:195] Run: crio config
	I0425 19:45:26.849328   53123 cni.go:84] Creating CNI manager for ""
	I0425 19:45:26.849350   53123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:45:26.849361   53123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 19:45:26.849386   53123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.9 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-543895 NodeName:force-systemd-flag-543895 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.9 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 19:45:26.849535   53123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-543895"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 19:45:26.849608   53123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 19:45:26.866688   53123 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 19:45:26.866764   53123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 19:45:26.882521   53123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0425 19:45:26.906294   53123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 19:45:26.927866   53123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
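
The kubeadm.yaml dump above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the cluster parameters and copied to /var/tmp/minikube/kubeadm.yaml.new. As a toy illustration of that parameter-to-YAML step, here is a text/template rendering of just the InitConfiguration fragment; the struct fields are invented for the example and are not minikube's real config types.

```go
package main

import (
	"os"
	"text/template"
)

// initCfg carries only the values this fragment needs; the real generator carries far more.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	cfg := initCfg{
		AdvertiseAddress: "192.168.50.9",
		BindPort:         8443,
		NodeName:         "force-systemd-flag-543895",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```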
	I0425 19:45:26.947670   53123 ssh_runner.go:195] Run: grep 192.168.50.9	control-plane.minikube.internal$ /etc/hosts
	I0425 19:45:26.952548   53123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 19:45:26.966828   53123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:45:27.108519   53123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 19:45:27.129593   53123 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895 for IP: 192.168.50.9
	I0425 19:45:27.129686   53123 certs.go:194] generating shared ca certs ...
	I0425 19:45:27.129720   53123 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.129932   53123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 19:45:27.130003   53123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 19:45:27.130018   53123 certs.go:256] generating profile certs ...
	I0425 19:45:27.130094   53123 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.key
	I0425 19:45:27.130113   53123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.crt with IP's: []
	I0425 19:45:27.339949   53123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.crt ...
	I0425 19:45:27.339982   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.crt: {Name:mka21ce8700d96e7e2a7baac6295d37643d39833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.340163   53123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.key ...
	I0425 19:45:27.340178   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/client.key: {Name:mk3da2c7077072027206d875958a4c67e4437e2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.340278   53123 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key.82ee14db
	I0425 19:45:27.340295   53123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt.82ee14db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.9]
	I0425 19:45:27.540829   53123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt.82ee14db ...
	I0425 19:45:27.540860   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt.82ee14db: {Name:mk5bb7660e299cd7366c328e4da5caacef99ac61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.555043   53123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key.82ee14db ...
	I0425 19:45:27.555086   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key.82ee14db: {Name:mk49224be3d2af8bba4a61205ea3457dd2c420f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.555222   53123 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt.82ee14db -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt
	I0425 19:45:27.555310   53123 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key.82ee14db -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key
	I0425 19:45:27.555429   53123 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.key
	I0425 19:45:27.555449   53123 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.crt with IP's: []
	I0425 19:45:27.878266   53123 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.crt ...
	I0425 19:45:27.878308   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.crt: {Name:mk139287786559070679219ccc67a1aedf78e07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:45:27.878521   53123 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.key ...
	I0425 19:45:27.878552   53123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.key: {Name:mk97fc4bdbe252521c88b221264384505cbf2911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
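
certs.go above mints the profile's client, apiserver, and aggregator certificates against the shared minikubeCA, with the apiserver certificate carrying the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.9]. Below is a compressed crypto/x509 sketch of issuing such a SAN-bearing certificate from a CA; generating the CA inline, the key sizes, validity periods, and the ignored errors are all simplifications for the example (minikube reuses the CA it keeps under .minikube/).

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumption: a CA generated on the spot; errors are ignored only to keep the sketch short.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the apiserver SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.9"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```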
	I0425 19:45:27.878671   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0425 19:45:27.878698   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0425 19:45:27.878718   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0425 19:45:27.878746   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0425 19:45:27.878768   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0425 19:45:27.878791   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0425 19:45:27.878813   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0425 19:45:27.878835   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0425 19:45:27.878907   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 19:45:27.878963   53123 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 19:45:27.878979   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 19:45:27.879013   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 19:45:27.879049   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 19:45:27.879085   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 19:45:27.879150   53123 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:45:27.879204   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> /usr/share/ca-certificates/136822.pem
	I0425 19:45:27.879222   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:45:27.879239   53123 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem -> /usr/share/ca-certificates/13682.pem
	I0425 19:45:27.879909   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 19:45:27.919821   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 19:45:27.966263   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 19:45:28.006119   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 19:45:28.043783   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0425 19:45:28.075042   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 19:45:28.114717   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 19:45:28.152622   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/force-systemd-flag-543895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 19:45:28.192689   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 19:45:28.228647   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 19:45:28.257712   53123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 19:45:28.286541   53123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 19:45:28.309154   53123 ssh_runner.go:195] Run: openssl version
	I0425 19:45:28.316476   53123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 19:45:28.331554   53123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 19:45:28.338490   53123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:45:28.338545   53123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 19:45:28.347190   53123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 19:45:28.364050   53123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 19:45:28.380933   53123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:45:28.388170   53123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:45:28.388235   53123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:45:28.395054   53123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
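
The last block links each uploaded PEM into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem) so the guest's OpenSSL trust store picks it up. A quick sketch of that step driven from Go via the same openssl and ln commands the log runs; the paths come from the log and passwordless sudo is assumed, as on the test VM.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCert hashes a CA certificate and links it under /etc/ssl/certs/<hash>.0,
// mirroring the `openssl x509 -hash -noout` + `ln -fs` pair in the log above.
func installCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Assumption: sudo is available non-interactively on the target machine.
	if err := exec.Command("sudo", "ln", "-fs", pemPath, link).Run(); err != nil {
		return fmt.Errorf("link %s: %w", link, err)
	}
	fmt.Printf("linked %s -> %s\n", link, pemPath)
	return nil
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```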
	
	
	==> CRI-O <==
	Apr 25 19:45:31 pause-762664 crio[2496]: time="2024-04-25 19:45:31.935771422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85f0f3551bac69e811415cfa0d495ebdc5b9f49409cae7becb932794f20a7f7e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074304165981795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52367164dab8a899fc7b1608d061d78e63e5b1be7e41d4dcf9ccb6bde2f27bf7,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074304153818475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e573cff37a1d889f0c091081889067a61cfab99fdd3b1dfd3934ef6b2e481aed,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074304130602073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f44775b3f5b755a6fc92c9d80b0ac7b2c13447e86f82b5d5b63dd1758cb6d06,PodSandboxId:f7b78dc49b4d5dd301d07d39f7c94b61ab2f5d8f12e463339263c00e580e3ace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714074295103647031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c63ed1a37e2bc42658a1e1eb2034458102ecbf8ec38cef2a8ae7a87507c37c,PodSandboxId:2b61a0ef18feb683cc3df6bf868bc06a7d34d83c838a9aef91eaaf5f4b325f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293533381925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb67105e46396566ae583a988c431d21e071f75697d23bd1ac8a3bfcb72ae03,PodSandboxId:786cf10e286c04f7911951463e9a98e2f467c9e79dab5768b91a619835e738fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293370659675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abe9ce14d8d9f3297db7653b706eb7076620562e28b6bb05be8be780daf4ca7,PodSandboxId:2a9bef34205e4a7e271253a069280737e27759acf70b88c2e56257f1b81572d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074290874165433,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074290800404366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074290766988399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074290669380161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762
664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0,PodSandboxId:2efdc1ea633beae5069e0de2197c59ca4bb48d90af87160c4ad87145cb1095c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714074253934831077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24,PodSandboxId:d9422001b9252d2fffb537fc620a587e6b06cc91e7d252a27046b3bb00716f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Sta
te:CONTAINER_EXITED,CreatedAt:1714074253929967712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4,PodSandboxId:e6ca87249707fda91783473e1c66fbcb661ae3296f85286084a2f760f577c224,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714074253111526962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab,PodSandboxId:5ec0be047ed337ea2ed0a1ace797074029d7603d0e83277f3a20c9f9aa311874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074233877722583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55096eac-e1bd-47bd-8ea9-e4cfdcf320b2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:31 pause-762664 crio[2496]: time="2024-04-25 19:45:31.937720806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10b2a409-2594-4cfb-9e4d-a889aa334ce6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:31 pause-762664 crio[2496]: time="2024-04-25 19:45:31.937823102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10b2a409-2594-4cfb-9e4d-a889aa334ce6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:31 pause-762664 crio[2496]: time="2024-04-25 19:45:31.938333180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85f0f3551bac69e811415cfa0d495ebdc5b9f49409cae7becb932794f20a7f7e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074304165981795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52367164dab8a899fc7b1608d061d78e63e5b1be7e41d4dcf9ccb6bde2f27bf7,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074304153818475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e573cff37a1d889f0c091081889067a61cfab99fdd3b1dfd3934ef6b2e481aed,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074304130602073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f44775b3f5b755a6fc92c9d80b0ac7b2c13447e86f82b5d5b63dd1758cb6d06,PodSandboxId:f7b78dc49b4d5dd301d07d39f7c94b61ab2f5d8f12e463339263c00e580e3ace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714074295103647031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c63ed1a37e2bc42658a1e1eb2034458102ecbf8ec38cef2a8ae7a87507c37c,PodSandboxId:2b61a0ef18feb683cc3df6bf868bc06a7d34d83c838a9aef91eaaf5f4b325f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293533381925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb67105e46396566ae583a988c431d21e071f75697d23bd1ac8a3bfcb72ae03,PodSandboxId:786cf10e286c04f7911951463e9a98e2f467c9e79dab5768b91a619835e738fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293370659675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abe9ce14d8d9f3297db7653b706eb7076620562e28b6bb05be8be780daf4ca7,PodSandboxId:2a9bef34205e4a7e271253a069280737e27759acf70b88c2e56257f1b81572d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074290874165433,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074290800404366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074290766988399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074290669380161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762
664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0,PodSandboxId:2efdc1ea633beae5069e0de2197c59ca4bb48d90af87160c4ad87145cb1095c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714074253934831077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24,PodSandboxId:d9422001b9252d2fffb537fc620a587e6b06cc91e7d252a27046b3bb00716f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Sta
te:CONTAINER_EXITED,CreatedAt:1714074253929967712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4,PodSandboxId:e6ca87249707fda91783473e1c66fbcb661ae3296f85286084a2f760f577c224,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714074253111526962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab,PodSandboxId:5ec0be047ed337ea2ed0a1ace797074029d7603d0e83277f3a20c9f9aa311874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074233877722583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10b2a409-2594-4cfb-9e4d-a889aa334ce6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.000242186Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce5a4eeb-bc34-4b18-a539-d527560c8043 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.000380457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce5a4eeb-bc34-4b18-a539-d527560c8043 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.002297225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=142aef6d-0cca-4177-88cc-b7eb2382edf6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.003176043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714074332003139703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=142aef6d-0cca-4177-88cc-b7eb2382edf6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.003764750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6170538-3f24-405d-a42b-04d9fe879843 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.003875136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6170538-3f24-405d-a42b-04d9fe879843 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.004409102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85f0f3551bac69e811415cfa0d495ebdc5b9f49409cae7becb932794f20a7f7e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074304165981795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52367164dab8a899fc7b1608d061d78e63e5b1be7e41d4dcf9ccb6bde2f27bf7,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074304153818475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e573cff37a1d889f0c091081889067a61cfab99fdd3b1dfd3934ef6b2e481aed,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074304130602073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f44775b3f5b755a6fc92c9d80b0ac7b2c13447e86f82b5d5b63dd1758cb6d06,PodSandboxId:f7b78dc49b4d5dd301d07d39f7c94b61ab2f5d8f12e463339263c00e580e3ace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714074295103647031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c63ed1a37e2bc42658a1e1eb2034458102ecbf8ec38cef2a8ae7a87507c37c,PodSandboxId:2b61a0ef18feb683cc3df6bf868bc06a7d34d83c838a9aef91eaaf5f4b325f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293533381925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb67105e46396566ae583a988c431d21e071f75697d23bd1ac8a3bfcb72ae03,PodSandboxId:786cf10e286c04f7911951463e9a98e2f467c9e79dab5768b91a619835e738fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293370659675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abe9ce14d8d9f3297db7653b706eb7076620562e28b6bb05be8be780daf4ca7,PodSandboxId:2a9bef34205e4a7e271253a069280737e27759acf70b88c2e56257f1b81572d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074290874165433,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074290800404366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074290766988399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074290669380161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762
664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0,PodSandboxId:2efdc1ea633beae5069e0de2197c59ca4bb48d90af87160c4ad87145cb1095c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714074253934831077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24,PodSandboxId:d9422001b9252d2fffb537fc620a587e6b06cc91e7d252a27046b3bb00716f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Sta
te:CONTAINER_EXITED,CreatedAt:1714074253929967712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4,PodSandboxId:e6ca87249707fda91783473e1c66fbcb661ae3296f85286084a2f760f577c224,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714074253111526962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab,PodSandboxId:5ec0be047ed337ea2ed0a1ace797074029d7603d0e83277f3a20c9f9aa311874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074233877722583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6170538-3f24-405d-a42b-04d9fe879843 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.075366596Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da90db3b-9161-44aa-8acf-956240385d94 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.075440048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da90db3b-9161-44aa-8acf-956240385d94 name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.078238808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=999eb377-a4d5-4416-9c4c-edd87efe859c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.078686103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714074332078655334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=999eb377-a4d5-4416-9c4c-edd87efe859c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.083146863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ce03c58-c130-4cb7-947c-cedc0dd59e63 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.083254673Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ce03c58-c130-4cb7-947c-cedc0dd59e63 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.083655518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85f0f3551bac69e811415cfa0d495ebdc5b9f49409cae7becb932794f20a7f7e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074304165981795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52367164dab8a899fc7b1608d061d78e63e5b1be7e41d4dcf9ccb6bde2f27bf7,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074304153818475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e573cff37a1d889f0c091081889067a61cfab99fdd3b1dfd3934ef6b2e481aed,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074304130602073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f44775b3f5b755a6fc92c9d80b0ac7b2c13447e86f82b5d5b63dd1758cb6d06,PodSandboxId:f7b78dc49b4d5dd301d07d39f7c94b61ab2f5d8f12e463339263c00e580e3ace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714074295103647031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c63ed1a37e2bc42658a1e1eb2034458102ecbf8ec38cef2a8ae7a87507c37c,PodSandboxId:2b61a0ef18feb683cc3df6bf868bc06a7d34d83c838a9aef91eaaf5f4b325f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293533381925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb67105e46396566ae583a988c431d21e071f75697d23bd1ac8a3bfcb72ae03,PodSandboxId:786cf10e286c04f7911951463e9a98e2f467c9e79dab5768b91a619835e738fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293370659675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abe9ce14d8d9f3297db7653b706eb7076620562e28b6bb05be8be780daf4ca7,PodSandboxId:2a9bef34205e4a7e271253a069280737e27759acf70b88c2e56257f1b81572d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074290874165433,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074290800404366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074290766988399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074290669380161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762
664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0,PodSandboxId:2efdc1ea633beae5069e0de2197c59ca4bb48d90af87160c4ad87145cb1095c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714074253934831077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24,PodSandboxId:d9422001b9252d2fffb537fc620a587e6b06cc91e7d252a27046b3bb00716f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Sta
te:CONTAINER_EXITED,CreatedAt:1714074253929967712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4,PodSandboxId:e6ca87249707fda91783473e1c66fbcb661ae3296f85286084a2f760f577c224,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714074253111526962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab,PodSandboxId:5ec0be047ed337ea2ed0a1ace797074029d7603d0e83277f3a20c9f9aa311874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074233877722583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ce03c58-c130-4cb7-947c-cedc0dd59e63 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.151387698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f79d9e1-9313-491e-8046-ff6437e2b5be name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.151498902Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f79d9e1-9313-491e-8046-ff6437e2b5be name=/runtime.v1.RuntimeService/Version
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.154806543Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0691b38-a7a8-48ca-8b7c-08f81e19abec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.155619340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714074332155585266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0691b38-a7a8-48ca-8b7c-08f81e19abec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.156650186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddca3e51-8e45-46cb-8540-af87085140f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.156758157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ddca3e51-8e45-46cb-8540-af87085140f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 19:45:32 pause-762664 crio[2496]: time="2024-04-25 19:45:32.157317551Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:85f0f3551bac69e811415cfa0d495ebdc5b9f49409cae7becb932794f20a7f7e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714074304165981795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52367164dab8a899fc7b1608d061d78e63e5b1be7e41d4dcf9ccb6bde2f27bf7,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714074304153818475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e573cff37a1d889f0c091081889067a61cfab99fdd3b1dfd3934ef6b2e481aed,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714074304130602073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f44775b3f5b755a6fc92c9d80b0ac7b2c13447e86f82b5d5b63dd1758cb6d06,PodSandboxId:f7b78dc49b4d5dd301d07d39f7c94b61ab2f5d8f12e463339263c00e580e3ace,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714074295103647031,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4c63ed1a37e2bc42658a1e1eb2034458102ecbf8ec38cef2a8ae7a87507c37c,PodSandboxId:2b61a0ef18feb683cc3df6bf868bc06a7d34d83c838a9aef91eaaf5f4b325f19,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293533381925,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"m
etrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb67105e46396566ae583a988c431d21e071f75697d23bd1ac8a3bfcb72ae03,PodSandboxId:786cf10e286c04f7911951463e9a98e2f467c9e79dab5768b91a619835e738fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714074293370659675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1abe9ce14d8d9f3297db7653b706eb7076620562e28b6bb05be8be780daf4ca7,PodSandboxId:2a9bef34205e4a7e271253a069280737e27759acf70b88c2e56257f1b81572d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714074290874165433,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c,PodSandboxId:1c6d0a639b6809d3afa79cffaebc51ff0e5b37d6746cf01e4e7136fc5630aeae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714074290800404366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etc
d-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7e305b61832599bdd45bcabde73a32,},Annotations:map[string]string{io.kubernetes.container.hash: da495e4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e,PodSandboxId:08129017aad0c871e8ae8cbd507b60594644a7ae69b2644c77eff7c48a6826f6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714074290766988399,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ae50117f119bc1f2822a38375444e0,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb,PodSandboxId:6fb24dbb33709934d0a87f2c15e8d474a15d34339512fd796645adc94b00b1b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714074290669380161,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-762
664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d576325ea51f34aa54f82e656b7d0c4b,},Annotations:map[string]string{io.kubernetes.container.hash: a6e71565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0,PodSandboxId:2efdc1ea633beae5069e0de2197c59ca4bb48d90af87160c4ad87145cb1095c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714074253934831077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g4zcp,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: d9d92885-9821-488c-bb93-a4a35d60fb1a,},Annotations:map[string]string{io.kubernetes.container.hash: 2cb10923,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24,PodSandboxId:d9422001b9252d2fffb537fc620a587e6b06cc91e7d252a27046b3bb00716f22,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Sta
te:CONTAINER_EXITED,CreatedAt:1714074253929967712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-x667t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e764791e-c170-49f4-b844-668b59f31072,},Annotations:map[string]string{io.kubernetes.container.hash: 5a852880,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4,PodSandboxId:e6ca87249707fda91783473e1c66fbcb661ae3296f85286084a2f760f577c224,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf43
1fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714074253111526962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-j2lhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3bb81443-7890-4887-9031-5a05eba9d67d,},Annotations:map[string]string{io.kubernetes.container.hash: 66ff7341,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab,PodSandboxId:5ec0be047ed337ea2ed0a1ace797074029d7603d0e83277f3a20c9f9aa311874,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc
8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714074233877722583,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-762664,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c10dbe8e41e23687433c56a8bc40569,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ddca3e51-8e45-46cb-8540-af87085140f5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	85f0f3551bac6       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   28 seconds ago       Running             kube-controller-manager   2                   08129017aad0c       kube-controller-manager-pause-762664
	52367164dab8a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   28 seconds ago       Running             kube-apiserver            2                   6fb24dbb33709       kube-apiserver-pause-762664
	e573cff37a1d8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   28 seconds ago       Running             etcd                      2                   1c6d0a639b680       etcd-pause-762664
	8f44775b3f5b7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   37 seconds ago       Running             kube-proxy                1                   f7b78dc49b4d5       kube-proxy-j2lhr
	c4c63ed1a37e2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   38 seconds ago       Running             coredns                   1                   2b61a0ef18feb       coredns-7db6d8ff4d-x667t
	fbb67105e4639       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   38 seconds ago       Running             coredns                   1                   786cf10e286c0       coredns-7db6d8ff4d-g4zcp
	1abe9ce14d8d9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   41 seconds ago       Running             kube-scheduler            1                   2a9bef34205e4       kube-scheduler-pause-762664
	15ab66a925226       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   41 seconds ago       Exited              etcd                      1                   1c6d0a639b680       etcd-pause-762664
	ea8fe2b8ac695       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   41 seconds ago       Exited              kube-controller-manager   1                   08129017aad0c       kube-controller-manager-pause-762664
	537c5ceb06ae4       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   41 seconds ago       Exited              kube-apiserver            1                   6fb24dbb33709       kube-apiserver-pause-762664
	ed4edf4113dee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   2efdc1ea633be       coredns-7db6d8ff4d-g4zcp
	bcd6bfb37758f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   d9422001b9252       coredns-7db6d8ff4d-x667t
	e6076e0ade5f4       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   About a minute ago   Exited              kube-proxy                0                   e6ca87249707f       kube-proxy-j2lhr
	531f413370c15       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   About a minute ago   Exited              kube-scheduler            0                   5ec0be047ed33       kube-scheduler-pause-762664
	
	
	==> coredns [bcd6bfb37758f67e63138cc561df4463350515256ae85f387f9c8fe1f9289b24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48810 - 1009 "HINFO IN 2689421302928699323.7076540480638446432. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025733554s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c4c63ed1a37e2bc42658a1e1eb2034458102ecbf8ec38cef2a8ae7a87507c37c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44868 - 44166 "HINFO IN 8201741368956416230.8992973879895194046. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01948517s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35336->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35336->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35352->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35352->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35340->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:35340->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [ed4edf4113dee1a7c3be1c23a3428994fb7c83950e243d4051f89db9c62ef3f0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42639 - 42509 "HINFO IN 7364694214519880523.2155443123500162415. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026832263s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fbb67105e46396566ae583a988c431d21e071f75697d23bd1ac8a3bfcb72ae03] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49786 - 21332 "HINFO IN 6916482763460460802.2067125449859031103. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056799969s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42460->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42460->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42448->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42448->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42464->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:42464->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-762664
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-762664
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=pause-762664
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T19_43_59_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:43:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-762664
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 19:45:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 19:45:07 +0000   Thu, 25 Apr 2024 19:43:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 19:45:07 +0000   Thu, 25 Apr 2024 19:43:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 19:45:07 +0000   Thu, 25 Apr 2024 19:43:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 19:45:07 +0000   Thu, 25 Apr 2024 19:43:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.146
	  Hostname:    pause-762664
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 880ea3a0a3354ddcb3726e14da9330f0
	  System UUID:                880ea3a0-a335-4ddc-b372-6e14da9330f0
	  Boot ID:                    15a82cff-b5eb-4c35-9e06-91b786620d34
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-g4zcp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 coredns-7db6d8ff4d-x667t                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-pause-762664                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         95s
	  kube-system                 kube-apiserver-pause-762664             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-pause-762664    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-j2lhr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-762664             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 78s                kube-proxy       
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  99s                kubelet          Node pause-762664 status is now: NodeHasSufficientMemory
	  Normal  Starting                 94s                kubelet          Starting kubelet.
	  Normal  NodeReady                93s                kubelet          Node pause-762664 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  93s                kubelet          Node pause-762664 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s                kubelet          Node pause-762664 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s                kubelet          Node pause-762664 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  93s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           81s                node-controller  Node pause-762664 event: Registered Node pause-762664 in Controller
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node pause-762664 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node pause-762664 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node pause-762664 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                node-controller  Node pause-762664 event: Registered Node pause-762664 in Controller
	
	
	==> dmesg <==
	[  +0.067929] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.226727] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.158215] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.344638] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +5.142204] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +0.066531] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.166259] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.063175] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.020634] systemd-fstab-generator[1276]: Ignoring "noauto" option for root device
	[  +0.086689] kauditd_printk_skb: 69 callbacks suppressed
	[Apr25 19:44] systemd-fstab-generator[1494]: Ignoring "noauto" option for root device
	[  +0.167812] kauditd_printk_skb: 21 callbacks suppressed
	[ +29.990014] systemd-fstab-generator[2347]: Ignoring "noauto" option for root device
	[  +0.111217] kauditd_printk_skb: 90 callbacks suppressed
	[  +0.079396] systemd-fstab-generator[2359]: Ignoring "noauto" option for root device
	[  +0.217475] systemd-fstab-generator[2374]: Ignoring "noauto" option for root device
	[  +0.167019] systemd-fstab-generator[2386]: Ignoring "noauto" option for root device
	[  +0.454921] systemd-fstab-generator[2433]: Ignoring "noauto" option for root device
	[  +6.450715] systemd-fstab-generator[2580]: Ignoring "noauto" option for root device
	[  +0.074539] kauditd_printk_skb: 112 callbacks suppressed
	[  +5.109286] kauditd_printk_skb: 88 callbacks suppressed
	[Apr25 19:45] systemd-fstab-generator[3424]: Ignoring "noauto" option for root device
	[  +0.095983] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.437882] kauditd_printk_skb: 31 callbacks suppressed
	[  +3.390537] systemd-fstab-generator[3716]: Ignoring "noauto" option for root device
	
	
	==> etcd [15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c] <==
	
	
	==> etcd [e573cff37a1d889f0c091081889067a61cfab99fdd3b1dfd3934ef6b2e481aed] <==
	{"level":"info","ts":"2024-04-25T19:45:04.498198Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a63b81a8045c22a0","local-member-id":"52a637c8f882c7df","added-peer-id":"52a637c8f882c7df","added-peer-peer-urls":["https://192.168.61.146:2380"]}
	{"level":"info","ts":"2024-04-25T19:45:04.498354Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a63b81a8045c22a0","local-member-id":"52a637c8f882c7df","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:45:04.498421Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T19:45:04.509757Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-25T19:45:04.510203Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.146:2380"}
	{"level":"info","ts":"2024-04-25T19:45:04.512299Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.146:2380"}
	{"level":"info","ts":"2024-04-25T19:45:04.516328Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-25T19:45:04.516258Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"52a637c8f882c7df","initial-advertise-peer-urls":["https://192.168.61.146:2380"],"listen-peer-urls":["https://192.168.61.146:2380"],"advertise-client-urls":["https://192.168.61.146:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.146:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-25T19:45:05.559331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-25T19:45:05.559466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-25T19:45:05.559555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df received MsgPreVoteResp from 52a637c8f882c7df at term 2"}
	{"level":"info","ts":"2024-04-25T19:45:05.559605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df became candidate at term 3"}
	{"level":"info","ts":"2024-04-25T19:45:05.559631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df received MsgVoteResp from 52a637c8f882c7df at term 3"}
	{"level":"info","ts":"2024-04-25T19:45:05.559657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"52a637c8f882c7df became leader at term 3"}
	{"level":"info","ts":"2024-04-25T19:45:05.559682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 52a637c8f882c7df elected leader 52a637c8f882c7df at term 3"}
	{"level":"info","ts":"2024-04-25T19:45:05.564563Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"52a637c8f882c7df","local-member-attributes":"{Name:pause-762664 ClientURLs:[https://192.168.61.146:2379]}","request-path":"/0/members/52a637c8f882c7df/attributes","cluster-id":"a63b81a8045c22a0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T19:45:05.56486Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:45:05.565114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T19:45:05.567126Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T19:45:05.567263Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T19:45:05.568767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-25T19:45:05.5715Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.146:2379"}
	{"level":"warn","ts":"2024-04-25T19:45:28.595313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.561822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.146\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-25T19:45:28.595644Z","caller":"traceutil/trace.go:171","msg":"trace[1171848953] range","detail":"{range_begin:/registry/masterleases/192.168.61.146; range_end:; response_count:1; response_revision:448; }","duration":"173.948965ms","start":"2024-04-25T19:45:28.421675Z","end":"2024-04-25T19:45:28.595624Z","steps":["trace[1171848953] 'range keys from in-memory index tree'  (duration: 173.43892ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T19:45:28.839719Z","caller":"traceutil/trace.go:171","msg":"trace[82655130] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"168.731854ms","start":"2024-04-25T19:45:28.670968Z","end":"2024-04-25T19:45:28.839699Z","steps":["trace[82655130] 'process raft request'  (duration: 167.862018ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:45:32 up 2 min,  0 users,  load average: 0.95, 0.37, 0.13
	Linux pause-762664 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [52367164dab8a899fc7b1608d061d78e63e5b1be7e41d4dcf9ccb6bde2f27bf7] <==
	I0425 19:45:07.334831       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0425 19:45:07.335356       1 aggregator.go:165] initial CRD sync complete...
	I0425 19:45:07.335439       1 autoregister_controller.go:141] Starting autoregister controller
	I0425 19:45:07.335466       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0425 19:45:07.396534       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0425 19:45:07.396668       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0425 19:45:07.396811       1 shared_informer.go:320] Caches are synced for configmaps
	I0425 19:45:07.397358       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0425 19:45:07.397925       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0425 19:45:07.398426       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0425 19:45:07.409818       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0425 19:45:07.409875       1 policy_source.go:224] refreshing policies
	I0425 19:45:07.409934       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0425 19:45:07.411809       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0425 19:45:07.412452       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0425 19:45:07.430094       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0425 19:45:07.436871       1 cache.go:39] Caches are synced for autoregister controller
	I0425 19:45:08.181867       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0425 19:45:08.990599       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0425 19:45:09.016577       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0425 19:45:09.068637       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0425 19:45:09.097806       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0425 19:45:09.108426       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0425 19:45:20.830655       1 controller.go:615] quota admission added evaluator for: endpoints
	I0425 19:45:20.882894       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb] <==
	I0425 19:44:51.120279       1 options.go:221] external host was not specified, using 192.168.61.146
	I0425 19:44:51.123495       1 server.go:148] Version: v1.30.0
	I0425 19:44:51.123535       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0425 19:44:51.944196       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:51.944325       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0425 19:44:51.944549       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0425 19:44:51.948803       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0425 19:44:51.950304       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0425 19:44:51.950495       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0425 19:44:51.950700       1 instance.go:299] Using reconciler: lease
	W0425 19:44:51.952146       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:52.945216       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:52.945297       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:52.952608       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:54.353672       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:54.448960       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:54.663499       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:56.470975       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:56.728910       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:44:56.749471       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:45:00.131691       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:45:00.743031       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0425 19:45:01.584494       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [85f0f3551bac69e811415cfa0d495ebdc5b9f49409cae7becb932794f20a7f7e] <==
	I0425 19:45:20.581937       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0425 19:45:20.587712       1 shared_informer.go:320] Caches are synced for persistent volume
	I0425 19:45:20.590269       1 shared_informer.go:320] Caches are synced for endpoint
	I0425 19:45:20.593317       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0425 19:45:20.628378       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0425 19:45:20.628517       1 shared_informer.go:320] Caches are synced for GC
	I0425 19:45:20.631227       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0425 19:45:20.637341       1 shared_informer.go:320] Caches are synced for node
	I0425 19:45:20.637753       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0425 19:45:20.639408       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0425 19:45:20.639979       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0425 19:45:20.640122       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0425 19:45:20.684179       1 shared_informer.go:320] Caches are synced for taint
	I0425 19:45:20.684323       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0425 19:45:20.684413       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-762664"
	I0425 19:45:20.684459       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0425 19:45:20.727803       1 shared_informer.go:320] Caches are synced for attach detach
	I0425 19:45:20.739517       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0425 19:45:20.808423       1 shared_informer.go:320] Caches are synced for resource quota
	I0425 19:45:20.820525       1 shared_informer.go:320] Caches are synced for job
	I0425 19:45:20.827578       1 shared_informer.go:320] Caches are synced for cronjob
	I0425 19:45:20.834440       1 shared_informer.go:320] Caches are synced for resource quota
	I0425 19:45:21.218929       1 shared_informer.go:320] Caches are synced for garbage collector
	I0425 19:45:21.243611       1 shared_informer.go:320] Caches are synced for garbage collector
	I0425 19:45:21.243796       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e] <==
	
	
	==> kube-proxy [8f44775b3f5b755a6fc92c9d80b0ac7b2c13447e86f82b5d5b63dd1758cb6d06] <==
	I0425 19:44:55.284871       1 server_linux.go:69] "Using iptables proxy"
	E0425 19:45:02.640353       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-762664\": dial tcp 192.168.61.146:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.146:34644->192.168.61.146:8443: read: connection reset by peer"
	E0425 19:45:03.772011       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-762664\": dial tcp 192.168.61.146:8443: connect: connection refused"
	I0425 19:45:07.362391       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.146"]
	I0425 19:45:07.447405       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:45:07.447434       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:45:07.447449       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:45:07.454893       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:45:07.455329       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:45:07.455493       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:45:07.456793       1 config.go:192] "Starting service config controller"
	I0425 19:45:07.456881       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:45:07.456921       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:45:07.456938       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:45:07.457452       1 config.go:319] "Starting node config controller"
	I0425 19:45:07.459127       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:45:07.557348       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:45:07.557367       1 shared_informer.go:320] Caches are synced for service config
	I0425 19:45:07.559752       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e6076e0ade5f404ea80f81f0feb30aea60f1f7da3db669cfc1e287fc7b7562e4] <==
	I0425 19:44:13.665485       1 server_linux.go:69] "Using iptables proxy"
	I0425 19:44:13.890737       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.146"]
	I0425 19:44:14.084802       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 19:44:14.084831       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 19:44:14.084849       1 server_linux.go:165] "Using iptables Proxier"
	I0425 19:44:14.088405       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 19:44:14.088631       1 server.go:872] "Version info" version="v1.30.0"
	I0425 19:44:14.088879       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 19:44:14.090031       1 config.go:192] "Starting service config controller"
	I0425 19:44:14.090209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 19:44:14.090258       1 config.go:101] "Starting endpoint slice config controller"
	I0425 19:44:14.090276       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 19:44:14.090840       1 config.go:319] "Starting node config controller"
	I0425 19:44:14.090875       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 19:44:14.191277       1 shared_informer.go:320] Caches are synced for service config
	I0425 19:44:14.191356       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 19:44:14.191885       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1abe9ce14d8d9f3297db7653b706eb7076620562e28b6bb05be8be780daf4ca7] <==
	W0425 19:45:07.298261       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 19:45:07.298315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 19:45:07.298367       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 19:45:07.298407       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 19:45:07.298452       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0425 19:45:07.298465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0425 19:45:07.298523       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0425 19:45:07.298575       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0425 19:45:07.298629       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 19:45:07.298678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0425 19:45:07.298731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 19:45:07.298781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 19:45:07.299031       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 19:45:07.308236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 19:45:07.308372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 19:45:07.308416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 19:45:07.308473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 19:45:07.308522       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 19:45:07.308586       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:45:07.308628       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0425 19:45:07.308706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0425 19:45:07.308747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0425 19:45:07.308818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 19:45:07.308867       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0425 19:45:07.451163       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [531f413370c155806a7b3732a11dba0bf44754da55580c8956b0b7b83cc522ab] <==
	E0425 19:43:56.468013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0425 19:43:56.468095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 19:43:56.468130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 19:43:56.468321       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 19:43:56.468367       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 19:43:57.287389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0425 19:43:57.287466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0425 19:43:57.322398       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 19:43:57.322480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 19:43:57.355653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0425 19:43:57.355880       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 19:43:57.484400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0425 19:43:57.484480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0425 19:43:57.579795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 19:43:57.579891       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 19:43:57.607501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0425 19:43:57.607671       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0425 19:43:57.641301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 19:43:57.641512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 19:43:57.644788       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0425 19:43:57.644926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0425 19:43:57.878525       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 19:43:57.878603       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0425 19:44:01.066986       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0425 19:44:35.942637       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.845600    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36ae50117f119bc1f2822a38375444e0-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-762664\" (UID: \"36ae50117f119bc1f2822a38375444e0\") " pod="kube-system/kube-controller-manager-pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.845614    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d576325ea51f34aa54f82e656b7d0c4b-ca-certs\") pod \"kube-apiserver-pause-762664\" (UID: \"d576325ea51f34aa54f82e656b7d0c4b\") " pod="kube-system/kube-apiserver-pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.845658    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d576325ea51f34aa54f82e656b7d0c4b-usr-share-ca-certificates\") pod \"kube-apiserver-pause-762664\" (UID: \"d576325ea51f34aa54f82e656b7d0c4b\") " pod="kube-system/kube-apiserver-pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.845672    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36ae50117f119bc1f2822a38375444e0-k8s-certs\") pod \"kube-controller-manager-pause-762664\" (UID: \"36ae50117f119bc1f2822a38375444e0\") " pod="kube-system/kube-controller-manager-pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.845691    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36ae50117f119bc1f2822a38375444e0-kubeconfig\") pod \"kube-controller-manager-pause-762664\" (UID: \"36ae50117f119bc1f2822a38375444e0\") " pod="kube-system/kube-controller-manager-pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: I0425 19:45:03.947372    3431 kubelet_node_status.go:73] "Attempting to register node" node="pause-762664"
	Apr 25 19:45:03 pause-762664 kubelet[3431]: E0425 19:45:03.949600    3431 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.146:8443: connect: connection refused" node="pause-762664"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: I0425 19:45:04.114693    3431 scope.go:117] "RemoveContainer" containerID="15ab66a9252269a15fc908f64cbff7da526692a51d1e24c1b7b8239eac0f811c"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: I0425 19:45:04.116722    3431 scope.go:117] "RemoveContainer" containerID="537c5ceb06ae4c85b2d7fbf8a18c8e538bffbef32f09ef7b94181544cc8501cb"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: I0425 19:45:04.117749    3431 scope.go:117] "RemoveContainer" containerID="ea8fe2b8ac69510ec66307f7411db16a574155caeac8a2aef3cc9d29db24ae9e"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: E0425 19:45:04.242243    3431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-762664?timeout=10s\": dial tcp 192.168.61.146:8443: connect: connection refused" interval="800ms"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: I0425 19:45:04.351874    3431 kubelet_node_status.go:73] "Attempting to register node" node="pause-762664"
	Apr 25 19:45:04 pause-762664 kubelet[3431]: E0425 19:45:04.353272    3431 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.146:8443: connect: connection refused" node="pause-762664"
	Apr 25 19:45:05 pause-762664 kubelet[3431]: I0425 19:45:05.155639    3431 kubelet_node_status.go:73] "Attempting to register node" node="pause-762664"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.509517    3431 kubelet_node_status.go:112] "Node was previously registered" node="pause-762664"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.509663    3431 kubelet_node_status.go:76] "Successfully registered node" node="pause-762664"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.511649    3431 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.512694    3431 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.617701    3431 apiserver.go:52] "Watching apiserver"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.621224    3431 topology_manager.go:215] "Topology Admit Handler" podUID="d9d92885-9821-488c-bb93-a4a35d60fb1a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g4zcp"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.621382    3431 topology_manager.go:215] "Topology Admit Handler" podUID="e764791e-c170-49f4-b844-668b59f31072" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x667t"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.621489    3431 topology_manager.go:215] "Topology Admit Handler" podUID="3bb81443-7890-4887-9031-5a05eba9d67d" podNamespace="kube-system" podName="kube-proxy-j2lhr"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.632390    3431 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.698774    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bb81443-7890-4887-9031-5a05eba9d67d-lib-modules\") pod \"kube-proxy-j2lhr\" (UID: \"3bb81443-7890-4887-9031-5a05eba9d67d\") " pod="kube-system/kube-proxy-j2lhr"
	Apr 25 19:45:07 pause-762664 kubelet[3431]: I0425 19:45:07.698924    3431 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bb81443-7890-4887-9031-5a05eba9d67d-xtables-lock\") pod \"kube-proxy-j2lhr\" (UID: \"3bb81443-7890-4887-9031-5a05eba9d67d\") " pod="kube-system/kube-proxy-j2lhr"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-762664 -n pause-762664
helpers_test.go:261: (dbg) Run:  kubectl --context pause-762664 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (78.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (315.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-210442 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-210442 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m14.896560645s)

                                                
                                                
-- stdout --
	* [old-k8s-version-210442] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18757
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-210442" primary control-plane node in "old-k8s-version-210442" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 19:52:17.989352   64931 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:52:17.989591   64931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:52:17.989602   64931 out.go:304] Setting ErrFile to fd 2...
	I0425 19:52:17.989607   64931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:52:17.989795   64931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:52:17.990452   64931 out.go:298] Setting JSON to false
	I0425 19:52:17.991530   64931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5684,"bootTime":1714069054,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:52:17.991592   64931 start.go:139] virtualization: kvm guest
	I0425 19:52:17.993951   64931 out.go:177] * [old-k8s-version-210442] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:52:17.995653   64931 notify.go:220] Checking for updates...
	I0425 19:52:17.995669   64931 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:52:17.997258   64931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:52:17.998850   64931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:52:18.000348   64931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:52:18.001650   64931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:52:18.003019   64931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:52:18.004756   64931 config.go:182] Loaded profile config "bridge-120641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:52:18.004859   64931 config.go:182] Loaded profile config "enable-default-cni-120641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:52:18.004953   64931 config.go:182] Loaded profile config "flannel-120641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:52:18.005063   64931 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:52:18.043970   64931 out.go:177] * Using the kvm2 driver based on user configuration
	I0425 19:52:18.045364   64931 start.go:297] selected driver: kvm2
	I0425 19:52:18.045385   64931 start.go:901] validating driver "kvm2" against <nil>
	I0425 19:52:18.045401   64931 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:52:18.046154   64931 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:52:18.046272   64931 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:52:18.061583   64931 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:52:18.061640   64931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 19:52:18.061866   64931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:52:18.061930   64931 cni.go:84] Creating CNI manager for ""
	I0425 19:52:18.061939   64931 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:52:18.061947   64931 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 19:52:18.062007   64931 start.go:340] cluster config:
	{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:52:18.062104   64931 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:52:18.064094   64931 out.go:177] * Starting "old-k8s-version-210442" primary control-plane node in "old-k8s-version-210442" cluster
	I0425 19:52:18.065486   64931 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 19:52:18.065525   64931 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:52:18.065535   64931 cache.go:56] Caching tarball of preloaded images
	I0425 19:52:18.065635   64931 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:52:18.065647   64931 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0425 19:52:18.065729   64931 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 19:52:18.065746   64931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json: {Name:mkfc6ffebfebfda25d6b4385b21ecc7aabe449bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:52:18.065872   64931 start.go:360] acquireMachinesLock for old-k8s-version-210442: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:52:48.692279   64931 start.go:364] duration metric: took 30.626346774s to acquireMachinesLock for "old-k8s-version-210442"
	I0425 19:52:48.692359   64931 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 19:52:48.692525   64931 start.go:125] createHost starting for "" (driver="kvm2")
	I0425 19:52:48.694550   64931 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0425 19:52:48.694754   64931 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:52:48.694803   64931 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:52:48.715110   64931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I0425 19:52:48.715657   64931 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:52:48.716293   64931 main.go:141] libmachine: Using API Version  1
	I0425 19:52:48.716318   64931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:52:48.716665   64931 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:52:48.716870   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 19:52:48.717016   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:52:48.717177   64931 start.go:159] libmachine.API.Create for "old-k8s-version-210442" (driver="kvm2")
	I0425 19:52:48.717204   64931 client.go:168] LocalClient.Create starting
	I0425 19:52:48.717235   64931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 19:52:48.717274   64931 main.go:141] libmachine: Decoding PEM data...
	I0425 19:52:48.717300   64931 main.go:141] libmachine: Parsing certificate...
	I0425 19:52:48.717374   64931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 19:52:48.717402   64931 main.go:141] libmachine: Decoding PEM data...
	I0425 19:52:48.717418   64931 main.go:141] libmachine: Parsing certificate...
	I0425 19:52:48.717445   64931 main.go:141] libmachine: Running pre-create checks...
	I0425 19:52:48.717461   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .PreCreateCheck
	I0425 19:52:48.717879   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetConfigRaw
	I0425 19:52:48.718355   64931 main.go:141] libmachine: Creating machine...
	I0425 19:52:48.718373   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .Create
	I0425 19:52:48.718498   64931 main.go:141] libmachine: (old-k8s-version-210442) Creating KVM machine...
	I0425 19:52:48.719848   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found existing default KVM network
	I0425 19:52:48.721412   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:48.721240   65243 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:03:55:91} reservation:<nil>}
	I0425 19:52:48.722735   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:48.722635   65243 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:67:26:a2} reservation:<nil>}
	I0425 19:52:48.724109   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:48.724010   65243 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000308a60}
	I0425 19:52:48.724138   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | created network xml: 
	I0425 19:52:48.724155   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | <network>
	I0425 19:52:48.724167   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG |   <name>mk-old-k8s-version-210442</name>
	I0425 19:52:48.724181   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG |   <dns enable='no'/>
	I0425 19:52:48.724194   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG |   
	I0425 19:52:48.724207   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0425 19:52:48.724228   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG |     <dhcp>
	I0425 19:52:48.724243   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0425 19:52:48.724254   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG |     </dhcp>
	I0425 19:52:48.724262   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG |   </ip>
	I0425 19:52:48.724278   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG |   
	I0425 19:52:48.724293   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | </network>
	I0425 19:52:48.724321   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | 
	I0425 19:52:48.730393   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | trying to create private KVM network mk-old-k8s-version-210442 192.168.61.0/24...
	I0425 19:52:48.812181   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | private KVM network mk-old-k8s-version-210442 192.168.61.0/24 created
	I0425 19:52:48.812211   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:48.812136   65243 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:52:48.812239   64931 main.go:141] libmachine: (old-k8s-version-210442) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442 ...
	I0425 19:52:48.812260   64931 main.go:141] libmachine: (old-k8s-version-210442) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 19:52:48.812274   64931 main.go:141] libmachine: (old-k8s-version-210442) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 19:52:49.066604   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:49.066419   65243 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa...
	I0425 19:52:49.286183   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:49.286034   65243 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/old-k8s-version-210442.rawdisk...
	I0425 19:52:49.286226   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Writing magic tar header
	I0425 19:52:49.286247   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Writing SSH key tar header
	I0425 19:52:49.286259   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:49.286227   65243 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442 ...
	I0425 19:52:49.286357   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442
	I0425 19:52:49.286395   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 19:52:49.286408   64931 main.go:141] libmachine: (old-k8s-version-210442) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442 (perms=drwx------)
	I0425 19:52:49.286426   64931 main.go:141] libmachine: (old-k8s-version-210442) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 19:52:49.286449   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:52:49.286461   64931 main.go:141] libmachine: (old-k8s-version-210442) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 19:52:49.286472   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 19:52:49.286485   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 19:52:49.286497   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Checking permissions on dir: /home/jenkins
	I0425 19:52:49.286508   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Checking permissions on dir: /home
	I0425 19:52:49.286516   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Skipping /home - not owner
	I0425 19:52:49.286530   64931 main.go:141] libmachine: (old-k8s-version-210442) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 19:52:49.286540   64931 main.go:141] libmachine: (old-k8s-version-210442) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 19:52:49.286551   64931 main.go:141] libmachine: (old-k8s-version-210442) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 19:52:49.286558   64931 main.go:141] libmachine: (old-k8s-version-210442) Creating domain...
	I0425 19:52:49.287734   64931 main.go:141] libmachine: (old-k8s-version-210442) define libvirt domain using xml: 
	I0425 19:52:49.287771   64931 main.go:141] libmachine: (old-k8s-version-210442) <domain type='kvm'>
	I0425 19:52:49.287783   64931 main.go:141] libmachine: (old-k8s-version-210442)   <name>old-k8s-version-210442</name>
	I0425 19:52:49.287796   64931 main.go:141] libmachine: (old-k8s-version-210442)   <memory unit='MiB'>2200</memory>
	I0425 19:52:49.287807   64931 main.go:141] libmachine: (old-k8s-version-210442)   <vcpu>2</vcpu>
	I0425 19:52:49.287819   64931 main.go:141] libmachine: (old-k8s-version-210442)   <features>
	I0425 19:52:49.287831   64931 main.go:141] libmachine: (old-k8s-version-210442)     <acpi/>
	I0425 19:52:49.287850   64931 main.go:141] libmachine: (old-k8s-version-210442)     <apic/>
	I0425 19:52:49.287873   64931 main.go:141] libmachine: (old-k8s-version-210442)     <pae/>
	I0425 19:52:49.287883   64931 main.go:141] libmachine: (old-k8s-version-210442)     
	I0425 19:52:49.287893   64931 main.go:141] libmachine: (old-k8s-version-210442)   </features>
	I0425 19:52:49.287901   64931 main.go:141] libmachine: (old-k8s-version-210442)   <cpu mode='host-passthrough'>
	I0425 19:52:49.287914   64931 main.go:141] libmachine: (old-k8s-version-210442)   
	I0425 19:52:49.287921   64931 main.go:141] libmachine: (old-k8s-version-210442)   </cpu>
	I0425 19:52:49.287934   64931 main.go:141] libmachine: (old-k8s-version-210442)   <os>
	I0425 19:52:49.287946   64931 main.go:141] libmachine: (old-k8s-version-210442)     <type>hvm</type>
	I0425 19:52:49.287959   64931 main.go:141] libmachine: (old-k8s-version-210442)     <boot dev='cdrom'/>
	I0425 19:52:49.287975   64931 main.go:141] libmachine: (old-k8s-version-210442)     <boot dev='hd'/>
	I0425 19:52:49.287990   64931 main.go:141] libmachine: (old-k8s-version-210442)     <bootmenu enable='no'/>
	I0425 19:52:49.288001   64931 main.go:141] libmachine: (old-k8s-version-210442)   </os>
	I0425 19:52:49.288010   64931 main.go:141] libmachine: (old-k8s-version-210442)   <devices>
	I0425 19:52:49.288022   64931 main.go:141] libmachine: (old-k8s-version-210442)     <disk type='file' device='cdrom'>
	I0425 19:52:49.288037   64931 main.go:141] libmachine: (old-k8s-version-210442)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/boot2docker.iso'/>
	I0425 19:52:49.288054   64931 main.go:141] libmachine: (old-k8s-version-210442)       <target dev='hdc' bus='scsi'/>
	I0425 19:52:49.288067   64931 main.go:141] libmachine: (old-k8s-version-210442)       <readonly/>
	I0425 19:52:49.288078   64931 main.go:141] libmachine: (old-k8s-version-210442)     </disk>
	I0425 19:52:49.288092   64931 main.go:141] libmachine: (old-k8s-version-210442)     <disk type='file' device='disk'>
	I0425 19:52:49.288105   64931 main.go:141] libmachine: (old-k8s-version-210442)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 19:52:49.288153   64931 main.go:141] libmachine: (old-k8s-version-210442)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/old-k8s-version-210442.rawdisk'/>
	I0425 19:52:49.288188   64931 main.go:141] libmachine: (old-k8s-version-210442)       <target dev='hda' bus='virtio'/>
	I0425 19:52:49.288198   64931 main.go:141] libmachine: (old-k8s-version-210442)     </disk>
	I0425 19:52:49.288210   64931 main.go:141] libmachine: (old-k8s-version-210442)     <interface type='network'>
	I0425 19:52:49.288226   64931 main.go:141] libmachine: (old-k8s-version-210442)       <source network='mk-old-k8s-version-210442'/>
	I0425 19:52:49.288237   64931 main.go:141] libmachine: (old-k8s-version-210442)       <model type='virtio'/>
	I0425 19:52:49.288243   64931 main.go:141] libmachine: (old-k8s-version-210442)     </interface>
	I0425 19:52:49.288248   64931 main.go:141] libmachine: (old-k8s-version-210442)     <interface type='network'>
	I0425 19:52:49.288256   64931 main.go:141] libmachine: (old-k8s-version-210442)       <source network='default'/>
	I0425 19:52:49.288267   64931 main.go:141] libmachine: (old-k8s-version-210442)       <model type='virtio'/>
	I0425 19:52:49.288280   64931 main.go:141] libmachine: (old-k8s-version-210442)     </interface>
	I0425 19:52:49.288296   64931 main.go:141] libmachine: (old-k8s-version-210442)     <serial type='pty'>
	I0425 19:52:49.288309   64931 main.go:141] libmachine: (old-k8s-version-210442)       <target port='0'/>
	I0425 19:52:49.288318   64931 main.go:141] libmachine: (old-k8s-version-210442)     </serial>
	I0425 19:52:49.288324   64931 main.go:141] libmachine: (old-k8s-version-210442)     <console type='pty'>
	I0425 19:52:49.288332   64931 main.go:141] libmachine: (old-k8s-version-210442)       <target type='serial' port='0'/>
	I0425 19:52:49.288338   64931 main.go:141] libmachine: (old-k8s-version-210442)     </console>
	I0425 19:52:49.288348   64931 main.go:141] libmachine: (old-k8s-version-210442)     <rng model='virtio'>
	I0425 19:52:49.288358   64931 main.go:141] libmachine: (old-k8s-version-210442)       <backend model='random'>/dev/random</backend>
	I0425 19:52:49.288372   64931 main.go:141] libmachine: (old-k8s-version-210442)     </rng>
	I0425 19:52:49.288384   64931 main.go:141] libmachine: (old-k8s-version-210442)     
	I0425 19:52:49.288394   64931 main.go:141] libmachine: (old-k8s-version-210442)     
	I0425 19:52:49.288399   64931 main.go:141] libmachine: (old-k8s-version-210442)   </devices>
	I0425 19:52:49.288404   64931 main.go:141] libmachine: (old-k8s-version-210442) </domain>
	I0425 19:52:49.288411   64931 main.go:141] libmachine: (old-k8s-version-210442) 
	I0425 19:52:49.296970   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:0d:e3:3b in network default
	I0425 19:52:49.297779   64931 main.go:141] libmachine: (old-k8s-version-210442) Ensuring networks are active...
	I0425 19:52:49.297813   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:49.298859   64931 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network default is active
	I0425 19:52:49.299315   64931 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network mk-old-k8s-version-210442 is active
	I0425 19:52:49.299956   64931 main.go:141] libmachine: (old-k8s-version-210442) Getting domain xml...
	I0425 19:52:49.301106   64931 main.go:141] libmachine: (old-k8s-version-210442) Creating domain...
	I0425 19:52:50.692581   64931 main.go:141] libmachine: (old-k8s-version-210442) Waiting to get IP...
	I0425 19:52:50.693596   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:50.694278   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:52:50.694317   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:50.694250   65243 retry.go:31] will retry after 225.370935ms: waiting for machine to come up
	I0425 19:52:50.921825   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:50.922438   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:52:50.922486   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:50.922400   65243 retry.go:31] will retry after 265.865279ms: waiting for machine to come up
	I0425 19:52:51.190069   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:51.190751   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:52:51.190768   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:51.190673   65243 retry.go:31] will retry after 443.343372ms: waiting for machine to come up
	I0425 19:52:51.635471   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:51.636083   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:52:51.636119   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:51.636070   65243 retry.go:31] will retry after 486.383692ms: waiting for machine to come up
	I0425 19:52:52.123706   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:52.124385   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:52:52.124412   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:52.124313   65243 retry.go:31] will retry after 594.262346ms: waiting for machine to come up
	I0425 19:52:52.720268   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:52.720837   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:52:52.720862   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:52.720781   65243 retry.go:31] will retry after 719.457512ms: waiting for machine to come up
	I0425 19:52:53.441760   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:53.442298   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:52:53.442329   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:53.442273   65243 retry.go:31] will retry after 907.510876ms: waiting for machine to come up
	I0425 19:52:54.352181   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:54.352720   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:52:54.352750   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:54.352675   65243 retry.go:31] will retry after 1.039417045s: waiting for machine to come up
	I0425 19:52:55.394023   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:55.394625   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:52:55.394646   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:55.394573   65243 retry.go:31] will retry after 1.341383594s: waiting for machine to come up
	I0425 19:52:56.738247   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:56.738871   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:52:56.738894   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:56.738788   65243 retry.go:31] will retry after 1.78654144s: waiting for machine to come up
	I0425 19:52:58.527566   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:52:58.528385   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:52:58.528414   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:52:58.528309   65243 retry.go:31] will retry after 1.752071295s: waiting for machine to come up
	I0425 19:53:00.281741   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:00.282293   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:53:00.282320   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:53:00.282237   65243 retry.go:31] will retry after 2.907902346s: waiting for machine to come up
	I0425 19:53:03.192100   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:03.192618   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:53:03.192643   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:53:03.192567   65243 retry.go:31] will retry after 3.274998775s: waiting for machine to come up
	I0425 19:53:06.468718   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:06.469253   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:53:06.469307   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:53:06.469201   65243 retry.go:31] will retry after 4.254472154s: waiting for machine to come up
	I0425 19:53:10.726097   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:10.726659   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 19:53:10.726690   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 19:53:10.726600   65243 retry.go:31] will retry after 6.496652794s: waiting for machine to come up
	I0425 19:53:17.227920   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:17.228560   64931 main.go:141] libmachine: (old-k8s-version-210442) Found IP for machine: 192.168.61.136
	I0425 19:53:17.228588   64931 main.go:141] libmachine: (old-k8s-version-210442) Reserving static IP address...
	I0425 19:53:17.228602   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has current primary IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:17.228966   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"} in network mk-old-k8s-version-210442
	I0425 19:53:17.317629   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Getting to WaitForSSH function...
	I0425 19:53:17.317661   64931 main.go:141] libmachine: (old-k8s-version-210442) Reserved static IP address: 192.168.61.136
	I0425 19:53:17.317674   64931 main.go:141] libmachine: (old-k8s-version-210442) Waiting for SSH to be available...
	I0425 19:53:17.324024   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:17.324398   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442
	I0425 19:53:17.324423   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find defined IP address of network mk-old-k8s-version-210442 interface with MAC address 52:54:00:11:0b:ca
	I0425 19:53:17.324601   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH client type: external
	I0425 19:53:17.324624   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa (-rw-------)
	I0425 19:53:17.324677   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 19:53:17.324691   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | About to run SSH command:
	I0425 19:53:17.324705   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | exit 0
	I0425 19:53:17.329006   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | SSH cmd err, output: exit status 255: 
	I0425 19:53:17.329041   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0425 19:53:17.329063   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | command : exit 0
	I0425 19:53:17.329075   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | err     : exit status 255
	I0425 19:53:17.329088   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | output  : 
	I0425 19:53:20.329321   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Getting to WaitForSSH function...
	I0425 19:53:20.332151   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:20.332632   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:20.332677   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:20.332786   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH client type: external
	I0425 19:53:20.332816   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa (-rw-------)
	I0425 19:53:20.332863   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 19:53:20.332880   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | About to run SSH command:
	I0425 19:53:20.332912   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | exit 0
	I0425 19:53:20.470884   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | SSH cmd err, output: <nil>: 
	I0425 19:53:20.471538   64931 main.go:141] libmachine: (old-k8s-version-210442) KVM machine creation complete!
	I0425 19:53:20.471615   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetConfigRaw
	I0425 19:53:20.472166   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:53:20.472397   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:53:20.472575   64931 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 19:53:20.472607   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetState
	I0425 19:53:20.474045   64931 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 19:53:20.474060   64931 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 19:53:20.474065   64931 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 19:53:20.474071   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 19:53:20.476575   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:20.476936   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:20.476964   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:20.477120   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 19:53:20.477286   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:20.477428   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:20.477551   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 19:53:20.477710   64931 main.go:141] libmachine: Using SSH client type: native
	I0425 19:53:20.477951   64931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 19:53:20.477969   64931 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 19:53:20.605934   64931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 19:53:20.605958   64931 main.go:141] libmachine: Detecting the provisioner...
	I0425 19:53:20.605968   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 19:53:20.608756   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:20.609182   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:20.609230   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:20.609362   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 19:53:20.609542   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:20.609710   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:20.609904   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 19:53:20.610080   64931 main.go:141] libmachine: Using SSH client type: native
	I0425 19:53:20.610330   64931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 19:53:20.610343   64931 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 19:53:20.733484   64931 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 19:53:20.733562   64931 main.go:141] libmachine: found compatible host: buildroot
	I0425 19:53:20.733576   64931 main.go:141] libmachine: Provisioning with buildroot...
	I0425 19:53:20.733588   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 19:53:20.733895   64931 buildroot.go:166] provisioning hostname "old-k8s-version-210442"
	I0425 19:53:20.733920   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 19:53:20.734110   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 19:53:20.738005   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:20.738391   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:20.738427   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:20.738567   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 19:53:20.738761   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:20.739157   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:20.739532   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 19:53:20.739726   64931 main.go:141] libmachine: Using SSH client type: native
	I0425 19:53:20.739946   64931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 19:53:20.739966   64931 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210442 && echo "old-k8s-version-210442" | sudo tee /etc/hostname
	I0425 19:53:20.887240   64931 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210442
	
	I0425 19:53:20.887270   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 19:53:20.890040   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:20.890467   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:20.890499   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:20.890663   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 19:53:20.890839   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:20.890958   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:20.891136   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 19:53:20.891253   64931 main.go:141] libmachine: Using SSH client type: native
	I0425 19:53:20.891441   64931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 19:53:20.891459   64931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 19:53:21.028373   64931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 19:53:21.028406   64931 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 19:53:21.028431   64931 buildroot.go:174] setting up certificates
	I0425 19:53:21.028442   64931 provision.go:84] configureAuth start
	I0425 19:53:21.028451   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 19:53:21.028760   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 19:53:21.031665   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.032074   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:21.032115   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.032260   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 19:53:21.034829   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.035195   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:21.035224   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.035403   64931 provision.go:143] copyHostCerts
	I0425 19:53:21.035467   64931 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 19:53:21.035477   64931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 19:53:21.035543   64931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 19:53:21.035697   64931 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 19:53:21.035706   64931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 19:53:21.035736   64931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 19:53:21.035823   64931 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 19:53:21.035830   64931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 19:53:21.035857   64931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 19:53:21.035923   64931 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210442 san=[127.0.0.1 192.168.61.136 localhost minikube old-k8s-version-210442]
	I0425 19:53:21.220917   64931 provision.go:177] copyRemoteCerts
	I0425 19:53:21.220969   64931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 19:53:21.220991   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 19:53:21.224214   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.224855   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:21.224885   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.224924   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 19:53:21.225119   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:21.225311   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 19:53:21.225448   64931 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 19:53:21.326079   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 19:53:21.361842   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0425 19:53:21.395359   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 19:53:21.429866   64931 provision.go:87] duration metric: took 401.411591ms to configureAuth
	I0425 19:53:21.429897   64931 buildroot.go:189] setting minikube options for container-runtime
	I0425 19:53:21.430078   64931 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 19:53:21.430163   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 19:53:21.433083   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.433438   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:21.433464   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.433605   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 19:53:21.433777   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:21.433958   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:21.434101   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 19:53:21.434291   64931 main.go:141] libmachine: Using SSH client type: native
	I0425 19:53:21.434515   64931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 19:53:21.434539   64931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 19:53:21.973332   64931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 19:53:21.973357   64931 main.go:141] libmachine: Checking connection to Docker...
	I0425 19:53:21.973365   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetURL
	I0425 19:53:21.974715   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using libvirt version 6000000
	I0425 19:53:21.977700   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.978405   64931 main.go:141] libmachine: Docker is up and running!
	I0425 19:53:21.978430   64931 main.go:141] libmachine: Reticulating splines...
	I0425 19:53:21.978438   64931 client.go:171] duration metric: took 33.261224201s to LocalClient.Create
	I0425 19:53:21.978455   64931 start.go:167] duration metric: took 33.261279179s to libmachine.API.Create "old-k8s-version-210442"
	I0425 19:53:21.978464   64931 start.go:293] postStartSetup for "old-k8s-version-210442" (driver="kvm2")
	I0425 19:53:21.978473   64931 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 19:53:21.978503   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:53:21.978223   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:21.978716   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.978766   64931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 19:53:21.978790   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 19:53:21.983288   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.983691   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:21.983754   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:21.983942   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 19:53:21.984128   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:21.984321   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 19:53:21.984564   64931 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 19:53:22.078894   64931 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 19:53:22.084802   64931 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 19:53:22.084841   64931 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 19:53:22.084916   64931 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 19:53:22.085001   64931 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 19:53:22.085116   64931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 19:53:22.096232   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:53:22.126871   64931 start.go:296] duration metric: took 148.386359ms for postStartSetup
	I0425 19:53:22.126934   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetConfigRaw
	I0425 19:53:22.174371   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 19:53:22.177535   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:22.177928   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:22.177965   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:22.178351   64931 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 19:53:22.239191   64931 start.go:128] duration metric: took 33.546644805s to createHost
	I0425 19:53:22.239242   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 19:53:22.242386   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:22.242737   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:22.242769   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:22.242949   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 19:53:22.243209   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:22.243424   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:22.243586   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 19:53:22.243742   64931 main.go:141] libmachine: Using SSH client type: native
	I0425 19:53:22.243968   64931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 19:53:22.243986   64931 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 19:53:22.374747   64931 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714074802.364049295
	
	I0425 19:53:22.374770   64931 fix.go:216] guest clock: 1714074802.364049295
	I0425 19:53:22.374780   64931 fix.go:229] Guest: 2024-04-25 19:53:22.364049295 +0000 UTC Remote: 2024-04-25 19:53:22.239220738 +0000 UTC m=+64.299693658 (delta=124.828557ms)
	I0425 19:53:22.374828   64931 fix.go:200] guest clock delta is within tolerance: 124.828557ms
	I0425 19:53:22.374838   64931 start.go:83] releasing machines lock for "old-k8s-version-210442", held for 33.682515011s
	I0425 19:53:22.374862   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:53:22.375139   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 19:53:22.379083   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:22.379362   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:22.379383   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:22.379691   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:53:22.380720   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:53:22.380932   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:53:22.381014   64931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 19:53:22.381048   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 19:53:22.381373   64931 ssh_runner.go:195] Run: cat /version.json
	I0425 19:53:22.381390   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 19:53:22.384375   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:22.384879   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:22.384902   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:22.385044   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 19:53:22.385162   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:22.385259   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 19:53:22.385364   64931 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 19:53:22.394953   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:22.397309   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:22.397384   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:22.397554   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 19:53:22.397731   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 19:53:22.397892   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 19:53:22.398080   64931 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 19:53:22.489094   64931 ssh_runner.go:195] Run: systemctl --version
	I0425 19:53:22.517994   64931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 19:53:22.700025   64931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 19:53:22.708589   64931 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 19:53:22.708670   64931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 19:53:22.730925   64931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 19:53:22.730948   64931 start.go:494] detecting cgroup driver to use...
	I0425 19:53:22.731003   64931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 19:53:22.754650   64931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 19:53:22.780391   64931 docker.go:217] disabling cri-docker service (if available) ...
	I0425 19:53:22.780476   64931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 19:53:22.800520   64931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 19:53:22.817449   64931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 19:53:22.970919   64931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 19:53:23.147024   64931 docker.go:233] disabling docker service ...
	I0425 19:53:23.147078   64931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 19:53:23.170261   64931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 19:53:23.192848   64931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 19:53:23.374021   64931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 19:53:23.534299   64931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 19:53:23.554824   64931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 19:53:23.585684   64931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0425 19:53:23.585755   64931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:53:23.602325   64931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 19:53:23.602405   64931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:53:23.619086   64931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:53:23.635939   64931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 19:53:23.651537   64931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 19:53:23.668067   64931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 19:53:23.689555   64931 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 19:53:23.689630   64931 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 19:53:23.715320   64931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 19:53:23.727021   64931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:53:23.877482   64931 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 19:53:24.067502   64931 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 19:53:24.067586   64931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 19:53:24.074729   64931 start.go:562] Will wait 60s for crictl version
	I0425 19:53:24.074782   64931 ssh_runner.go:195] Run: which crictl
	I0425 19:53:24.080638   64931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 19:53:24.129322   64931 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 19:53:24.129407   64931 ssh_runner.go:195] Run: crio --version
	I0425 19:53:24.164531   64931 ssh_runner.go:195] Run: crio --version
	I0425 19:53:24.211157   64931 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0425 19:53:24.212890   64931 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 19:53:24.216088   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:24.216509   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 19:53:24.216541   64931 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 19:53:24.216742   64931 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 19:53:24.223404   64931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 19:53:24.244051   64931 kubeadm.go:877] updating cluster {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 19:53:24.244171   64931 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 19:53:24.244223   64931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:53:24.293109   64931 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 19:53:24.293178   64931 ssh_runner.go:195] Run: which lz4
	I0425 19:53:24.299155   64931 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 19:53:24.305234   64931 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 19:53:24.305272   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0425 19:53:26.582626   64931 crio.go:462] duration metric: took 2.283506669s to copy over tarball
	I0425 19:53:26.582694   64931 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 19:53:30.531770   64931 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.949039869s)
	I0425 19:53:30.531799   64931 crio.go:469] duration metric: took 3.949147243s to extract the tarball
	I0425 19:53:30.531806   64931 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 19:53:30.586086   64931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 19:53:30.646196   64931 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 19:53:30.646253   64931 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 19:53:30.646331   64931 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 19:53:30.646388   64931 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 19:53:30.646651   64931 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0425 19:53:30.646712   64931 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 19:53:30.646737   64931 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 19:53:30.646888   64931 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0425 19:53:30.646986   64931 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0425 19:53:30.646339   64931 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 19:53:30.647623   64931 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 19:53:30.647636   64931 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 19:53:30.647644   64931 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0425 19:53:30.647732   64931 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0425 19:53:30.647758   64931 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 19:53:30.647813   64931 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 19:53:30.647873   64931 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0425 19:53:30.648436   64931 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 19:53:30.788807   64931 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0425 19:53:30.795328   64931 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0425 19:53:30.803516   64931 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0425 19:53:30.804585   64931 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0425 19:53:30.810630   64931 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0425 19:53:30.820520   64931 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0425 19:53:30.860532   64931 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 19:53:30.933806   64931 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0425 19:53:30.933862   64931 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 19:53:30.933920   64931 ssh_runner.go:195] Run: which crictl
	I0425 19:53:31.019592   64931 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0425 19:53:31.019638   64931 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 19:53:31.019687   64931 ssh_runner.go:195] Run: which crictl
	I0425 19:53:31.029705   64931 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0425 19:53:31.029747   64931 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 19:53:31.029812   64931 ssh_runner.go:195] Run: which crictl
	I0425 19:53:31.035895   64931 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0425 19:53:31.035952   64931 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0425 19:53:31.036009   64931 ssh_runner.go:195] Run: which crictl
	I0425 19:53:31.036009   64931 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0425 19:53:31.036039   64931 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0425 19:53:31.036073   64931 ssh_runner.go:195] Run: which crictl
	I0425 19:53:31.036108   64931 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0425 19:53:31.036160   64931 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0425 19:53:31.036237   64931 ssh_runner.go:195] Run: which crictl
	I0425 19:53:31.061626   64931 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0425 19:53:31.061672   64931 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 19:53:31.061718   64931 ssh_runner.go:195] Run: which crictl
	I0425 19:53:31.061722   64931 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0425 19:53:31.061775   64931 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0425 19:53:31.061809   64931 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0425 19:53:31.061837   64931 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0425 19:53:31.061884   64931 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0425 19:53:31.061920   64931 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0425 19:53:31.221696   64931 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 19:53:31.221790   64931 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0425 19:53:31.241455   64931 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0425 19:53:31.241535   64931 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0425 19:53:31.241571   64931 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0425 19:53:31.241630   64931 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0425 19:53:31.241704   64931 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0425 19:53:31.263169   64931 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0425 19:53:31.562186   64931 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 19:53:31.717709   64931 cache_images.go:92] duration metric: took 1.071435503s to LoadCachedImages
	W0425 19:53:31.717875   64931 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0425 19:53:31.717929   64931 kubeadm.go:928] updating node { 192.168.61.136 8443 v1.20.0 crio true true} ...
	I0425 19:53:31.718077   64931 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210442 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 19:53:31.718239   64931 ssh_runner.go:195] Run: crio config
	I0425 19:53:31.790378   64931 cni.go:84] Creating CNI manager for ""
	I0425 19:53:31.790486   64931 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:53:31.790518   64931 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 19:53:31.790570   64931 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210442 NodeName:old-k8s-version-210442 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0425 19:53:31.790794   64931 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210442"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 19:53:31.790921   64931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0425 19:53:31.808599   64931 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 19:53:31.808685   64931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 19:53:31.824250   64931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0425 19:53:31.853018   64931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 19:53:31.886269   64931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0425 19:53:31.922584   64931 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0425 19:53:31.929295   64931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 19:53:31.949757   64931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 19:53:32.134483   64931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 19:53:32.165679   64931 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442 for IP: 192.168.61.136
	I0425 19:53:32.165709   64931 certs.go:194] generating shared ca certs ...
	I0425 19:53:32.165729   64931 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:53:32.165916   64931 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 19:53:32.165984   64931 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 19:53:32.166001   64931 certs.go:256] generating profile certs ...
	I0425 19:53:32.166073   64931 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.key
	I0425 19:53:32.166092   64931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.crt with IP's: []
	I0425 19:53:32.433637   64931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.crt ...
	I0425 19:53:32.433680   64931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.crt: {Name:mkdc68b904dcb5f81ac68e7eba56450f9d50db26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:53:32.433910   64931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.key ...
	I0425 19:53:32.433936   64931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.key: {Name:mkc119ee4b2c16bf6ba695391601b24487c622fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:53:32.434083   64931 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key.1533c9ac
	I0425 19:53:32.434109   64931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt.1533c9ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.136]
	I0425 19:53:32.626791   64931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt.1533c9ac ...
	I0425 19:53:32.626832   64931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt.1533c9ac: {Name:mkfc44346b74462890c6e0557b5528bf865f7b90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:53:32.627038   64931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key.1533c9ac ...
	I0425 19:53:32.627056   64931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key.1533c9ac: {Name:mk35252177723bebf759f89f98270df205f61a01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:53:32.627154   64931 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt.1533c9ac -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt
	I0425 19:53:32.627267   64931 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key.1533c9ac -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key
	I0425 19:53:32.627349   64931 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key
	I0425 19:53:32.627374   64931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.crt with IP's: []
	I0425 19:53:32.821046   64931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.crt ...
	I0425 19:53:32.821087   64931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.crt: {Name:mkfa0ae33bfce5f2eb1f92e8cfbac6e444380012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:53:32.821290   64931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key ...
	I0425 19:53:32.821307   64931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key: {Name:mk3ba98a2fdfb1238a118982d89bb537f121e173 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 19:53:32.821549   64931 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 19:53:32.821612   64931 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 19:53:32.821626   64931 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 19:53:32.821662   64931 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 19:53:32.821698   64931 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 19:53:32.821731   64931 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 19:53:32.821786   64931 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 19:53:32.823067   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 19:53:32.868481   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 19:53:32.908291   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 19:53:32.951840   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 19:53:33.003890   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0425 19:53:33.044110   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 19:53:33.080884   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 19:53:33.114408   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 19:53:33.158524   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 19:53:33.206516   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 19:53:33.246085   64931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 19:53:33.309392   64931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 19:53:33.348015   64931 ssh_runner.go:195] Run: openssl version
	I0425 19:53:33.357011   64931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 19:53:33.374483   64931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 19:53:33.381767   64931 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 19:53:33.381896   64931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 19:53:33.392135   64931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 19:53:33.409677   64931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 19:53:33.424480   64931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:53:33.430135   64931 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:53:33.430193   64931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 19:53:33.436579   64931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 19:53:33.450423   64931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 19:53:33.467041   64931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 19:53:33.473143   64931 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 19:53:33.473191   64931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 19:53:33.481016   64931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 19:53:33.496513   64931 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 19:53:33.501758   64931 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 19:53:33.501830   64931 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:53:33.501935   64931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 19:53:33.501992   64931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 19:53:33.546739   64931 cri.go:89] found id: ""
	I0425 19:53:33.546881   64931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0425 19:53:33.561176   64931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 19:53:33.572915   64931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 19:53:33.589009   64931 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 19:53:33.589038   64931 kubeadm.go:156] found existing configuration files:
	
	I0425 19:53:33.589090   64931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 19:53:33.602697   64931 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 19:53:33.602808   64931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 19:53:33.618278   64931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 19:53:33.633133   64931 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 19:53:33.633193   64931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 19:53:33.647349   64931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 19:53:33.662019   64931 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 19:53:33.662082   64931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 19:53:33.675385   64931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 19:53:33.690011   64931 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 19:53:33.690075   64931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 19:53:33.703405   64931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 19:53:33.885470   64931 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 19:53:33.885652   64931 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 19:53:34.062788   64931 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 19:53:34.062949   64931 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 19:53:34.063114   64931 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 19:53:34.358102   64931 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 19:53:34.359985   64931 out.go:204]   - Generating certificates and keys ...
	I0425 19:53:34.363304   64931 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 19:53:34.363403   64931 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 19:53:34.566392   64931 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0425 19:53:35.131077   64931 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0425 19:53:35.248999   64931 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0425 19:53:35.419406   64931 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0425 19:53:35.598586   64931 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0425 19:53:35.598958   64931 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-210442] and IPs [192.168.61.136 127.0.0.1 ::1]
	I0425 19:53:35.821027   64931 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0425 19:53:35.821236   64931 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-210442] and IPs [192.168.61.136 127.0.0.1 ::1]
	I0425 19:53:36.067133   64931 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0425 19:53:36.203000   64931 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0425 19:53:36.279224   64931 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0425 19:53:36.279335   64931 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 19:53:36.669279   64931 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 19:53:36.803326   64931 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 19:53:37.012152   64931 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 19:53:37.094375   64931 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 19:53:37.112482   64931 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 19:53:37.113825   64931 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 19:53:37.113954   64931 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 19:53:37.290353   64931 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 19:53:37.293594   64931 out.go:204]   - Booting up control plane ...
	I0425 19:53:37.293706   64931 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 19:53:37.307387   64931 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 19:53:37.308975   64931 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 19:53:37.310268   64931 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 19:53:37.315380   64931 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 19:54:17.315209   64931 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 19:54:17.315484   64931 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:54:17.315788   64931 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:54:22.316222   64931 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:54:22.316508   64931 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:54:32.317359   64931 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:54:32.317666   64931 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:54:52.318579   64931 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:54:52.318861   64931 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:55:32.318993   64931 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:55:32.319277   64931 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:55:32.319300   64931 kubeadm.go:309] 
	I0425 19:55:32.319348   64931 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 19:55:32.319399   64931 kubeadm.go:309] 		timed out waiting for the condition
	I0425 19:55:32.319418   64931 kubeadm.go:309] 
	I0425 19:55:32.319464   64931 kubeadm.go:309] 	This error is likely caused by:
	I0425 19:55:32.319510   64931 kubeadm.go:309] 		- The kubelet is not running
	I0425 19:55:32.319646   64931 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 19:55:32.319661   64931 kubeadm.go:309] 
	I0425 19:55:32.319774   64931 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 19:55:32.319805   64931 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 19:55:32.319834   64931 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 19:55:32.319840   64931 kubeadm.go:309] 
	I0425 19:55:32.319931   64931 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 19:55:32.320000   64931 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 19:55:32.320007   64931 kubeadm.go:309] 
	I0425 19:55:32.320089   64931 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 19:55:32.320162   64931 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 19:55:32.320238   64931 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 19:55:32.320316   64931 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 19:55:32.320331   64931 kubeadm.go:309] 
	I0425 19:55:32.321397   64931 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 19:55:32.321514   64931 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 19:55:32.321597   64931 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0425 19:55:32.321780   64931 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-210442] and IPs [192.168.61.136 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-210442] and IPs [192.168.61.136 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-210442] and IPs [192.168.61.136 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-210442] and IPs [192.168.61.136 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0425 19:55:32.321846   64931 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 19:55:35.525280   64931 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.203405812s)
	I0425 19:55:35.525355   64931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 19:55:35.542057   64931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 19:55:35.553425   64931 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 19:55:35.553455   64931 kubeadm.go:156] found existing configuration files:
	
	I0425 19:55:35.553512   64931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 19:55:35.565219   64931 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 19:55:35.565281   64931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 19:55:35.577713   64931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 19:55:35.589108   64931 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 19:55:35.589157   64931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 19:55:35.601159   64931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 19:55:35.611842   64931 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 19:55:35.611906   64931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 19:55:35.623101   64931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 19:55:35.634414   64931 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 19:55:35.634466   64931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 19:55:35.645505   64931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 19:55:35.726131   64931 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 19:55:35.726188   64931 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 19:55:35.883125   64931 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 19:55:35.883236   64931 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 19:55:35.883358   64931 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 19:55:36.130721   64931 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 19:55:36.132213   64931 out.go:204]   - Generating certificates and keys ...
	I0425 19:55:36.132326   64931 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 19:55:36.132583   64931 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 19:55:36.134196   64931 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 19:55:36.134286   64931 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 19:55:36.134362   64931 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 19:55:36.134433   64931 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 19:55:36.135001   64931 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 19:55:36.135527   64931 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 19:55:36.136093   64931 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 19:55:36.136592   64931 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 19:55:36.136853   64931 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 19:55:36.136983   64931 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 19:55:36.333739   64931 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 19:55:36.427372   64931 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 19:55:36.532648   64931 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 19:55:36.951649   64931 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 19:55:36.971195   64931 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 19:55:36.972465   64931 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 19:55:36.972574   64931 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 19:55:37.165124   64931 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 19:55:37.166969   64931 out.go:204]   - Booting up control plane ...
	I0425 19:55:37.167051   64931 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 19:55:37.185410   64931 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 19:55:37.186877   64931 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 19:55:37.188489   64931 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 19:55:37.192335   64931 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 19:56:17.195402   64931 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 19:56:17.195740   64931 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:56:17.196238   64931 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:56:22.197261   64931 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:56:22.197459   64931 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:56:32.198413   64931 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:56:32.198646   64931 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:56:52.199746   64931 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:56:52.199945   64931 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:57:32.198680   64931 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 19:57:32.198925   64931 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 19:57:32.198943   64931 kubeadm.go:309] 
	I0425 19:57:32.199000   64931 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 19:57:32.199053   64931 kubeadm.go:309] 		timed out waiting for the condition
	I0425 19:57:32.199084   64931 kubeadm.go:309] 
	I0425 19:57:32.199140   64931 kubeadm.go:309] 	This error is likely caused by:
	I0425 19:57:32.199186   64931 kubeadm.go:309] 		- The kubelet is not running
	I0425 19:57:32.199334   64931 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 19:57:32.199345   64931 kubeadm.go:309] 
	I0425 19:57:32.199475   64931 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 19:57:32.199533   64931 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 19:57:32.199583   64931 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 19:57:32.199598   64931 kubeadm.go:309] 
	I0425 19:57:32.199748   64931 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 19:57:32.199850   64931 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 19:57:32.199863   64931 kubeadm.go:309] 
	I0425 19:57:32.200023   64931 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 19:57:32.200153   64931 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 19:57:32.200246   64931 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 19:57:32.200346   64931 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 19:57:32.200372   64931 kubeadm.go:309] 
	I0425 19:57:32.201435   64931 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 19:57:32.201530   64931 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 19:57:32.201595   64931 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0425 19:57:32.201647   64931 kubeadm.go:393] duration metric: took 3m58.699822508s to StartCluster
	I0425 19:57:32.201707   64931 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 19:57:32.201761   64931 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 19:57:32.247023   64931 cri.go:89] found id: ""
	I0425 19:57:32.247047   64931 logs.go:276] 0 containers: []
	W0425 19:57:32.247054   64931 logs.go:278] No container was found matching "kube-apiserver"
	I0425 19:57:32.247060   64931 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 19:57:32.247115   64931 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 19:57:32.284910   64931 cri.go:89] found id: ""
	I0425 19:57:32.284939   64931 logs.go:276] 0 containers: []
	W0425 19:57:32.284950   64931 logs.go:278] No container was found matching "etcd"
	I0425 19:57:32.284956   64931 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 19:57:32.285018   64931 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 19:57:32.320326   64931 cri.go:89] found id: ""
	I0425 19:57:32.320351   64931 logs.go:276] 0 containers: []
	W0425 19:57:32.320362   64931 logs.go:278] No container was found matching "coredns"
	I0425 19:57:32.320369   64931 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 19:57:32.320424   64931 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 19:57:32.356503   64931 cri.go:89] found id: ""
	I0425 19:57:32.356526   64931 logs.go:276] 0 containers: []
	W0425 19:57:32.356536   64931 logs.go:278] No container was found matching "kube-scheduler"
	I0425 19:57:32.356544   64931 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 19:57:32.356597   64931 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 19:57:32.391404   64931 cri.go:89] found id: ""
	I0425 19:57:32.391425   64931 logs.go:276] 0 containers: []
	W0425 19:57:32.391432   64931 logs.go:278] No container was found matching "kube-proxy"
	I0425 19:57:32.391437   64931 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 19:57:32.391487   64931 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 19:57:32.428356   64931 cri.go:89] found id: ""
	I0425 19:57:32.428388   64931 logs.go:276] 0 containers: []
	W0425 19:57:32.428396   64931 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 19:57:32.428402   64931 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 19:57:32.428462   64931 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 19:57:32.482693   64931 cri.go:89] found id: ""
	I0425 19:57:32.482721   64931 logs.go:276] 0 containers: []
	W0425 19:57:32.482729   64931 logs.go:278] No container was found matching "kindnet"
	I0425 19:57:32.482737   64931 logs.go:123] Gathering logs for kubelet ...
	I0425 19:57:32.482750   64931 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 19:57:32.535482   64931 logs.go:123] Gathering logs for dmesg ...
	I0425 19:57:32.535515   64931 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 19:57:32.550608   64931 logs.go:123] Gathering logs for describe nodes ...
	I0425 19:57:32.550635   64931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 19:57:32.676486   64931 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 19:57:32.676516   64931 logs.go:123] Gathering logs for CRI-O ...
	I0425 19:57:32.676530   64931 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 19:57:32.775490   64931 logs.go:123] Gathering logs for container status ...
	I0425 19:57:32.775523   64931 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0425 19:57:32.818978   64931 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0425 19:57:32.819024   64931 out.go:239] * 
	* 
	W0425 19:57:32.819076   64931 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 19:57:32.819098   64931 out.go:239] * 
	* 
	W0425 19:57:32.819961   64931 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 19:57:32.823516   64931 out.go:177] 
	W0425 19:57:32.824901   64931 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 19:57:32.824962   64931 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0425 19:57:32.824988   64931 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0425 19:57:32.827107   64931 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-210442 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 6 (243.196202ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:57:33.107829   71650 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-210442" does not appear in /home/jenkins/minikube-integration/18757-6355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-210442" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (315.19s)
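For reference, the kubeadm output above already names the diagnosis steps; the sketch below only strings them together so they can be run in one pass against the failing profile. The profile name, CRI-O socket, and the kubelet.cgroup-driver=systemd retry are taken from the log itself; wrapping the node-side commands in `minikube ssh` is an assumption about how a maintainer would reach the VM, not something the test does.

    # Kubelet health on the node (the same commands quoted in the kubeadm output above)
    out/minikube-linux-amd64 ssh -p old-k8s-version-210442 "sudo systemctl status kubelet"
    out/minikube-linux-amd64 ssh -p old-k8s-version-210442 "sudo journalctl -xeu kubelet | tail -n 100"
    out/minikube-linux-amd64 ssh -p old-k8s-version-210442 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

    # Retry with the cgroup driver the K8S_KUBELET_NOT_RUNNING suggestion points at
    out/minikube-linux-amd64 start -p old-k8s-version-210442 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd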

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-512173 --alsologtostderr -v=3
E0425 19:55:15.160796   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 19:55:15.547419   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:55:17.721153   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 19:55:22.841432   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-512173 --alsologtostderr -v=3: exit status 82 (2m0.54645306s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-512173"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 19:55:14.252461   70878 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:55:14.252558   70878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:55:14.252567   70878 out.go:304] Setting ErrFile to fd 2...
	I0425 19:55:14.252572   70878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:55:14.253225   70878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:55:14.253600   70878 out.go:298] Setting JSON to false
	I0425 19:55:14.253797   70878 mustload.go:65] Loading cluster: embed-certs-512173
	I0425 19:55:14.254582   70878 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:55:14.254676   70878 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/config.json ...
	I0425 19:55:14.254868   70878 mustload.go:65] Loading cluster: embed-certs-512173
	I0425 19:55:14.254967   70878 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:55:14.254990   70878 stop.go:39] StopHost: embed-certs-512173
	I0425 19:55:14.255353   70878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:55:14.255392   70878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:55:14.271396   70878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0425 19:55:14.271834   70878 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:55:14.272383   70878 main.go:141] libmachine: Using API Version  1
	I0425 19:55:14.272406   70878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:55:14.272768   70878 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:55:14.274692   70878 out.go:177] * Stopping node "embed-certs-512173"  ...
	I0425 19:55:14.275857   70878 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0425 19:55:14.275885   70878 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 19:55:14.276134   70878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0425 19:55:14.276166   70878 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 19:55:14.279089   70878 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 19:55:14.279547   70878 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 20:54:15 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 19:55:14.279602   70878 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 19:55:14.279708   70878 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 19:55:14.279873   70878 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 19:55:14.279991   70878 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 19:55:14.280171   70878 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 19:55:14.414788   70878 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0425 19:55:14.492277   70878 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0425 19:55:14.557361   70878 main.go:141] libmachine: Stopping "embed-certs-512173"...
	I0425 19:55:14.557408   70878 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 19:55:14.559435   70878 main.go:141] libmachine: (embed-certs-512173) Calling .Stop
	I0425 19:55:14.563729   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 0/120
	I0425 19:55:15.565452   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 1/120
	I0425 19:55:16.566920   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 2/120
	I0425 19:55:17.568421   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 3/120
	I0425 19:55:18.569875   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 4/120
	I0425 19:55:19.571844   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 5/120
	I0425 19:55:20.574578   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 6/120
	I0425 19:55:21.576033   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 7/120
	I0425 19:55:22.577480   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 8/120
	I0425 19:55:23.578864   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 9/120
	I0425 19:55:24.581309   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 10/120
	I0425 19:55:25.582575   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 11/120
	I0425 19:55:26.584026   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 12/120
	I0425 19:55:27.585940   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 13/120
	I0425 19:55:28.587475   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 14/120
	I0425 19:55:29.589362   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 15/120
	I0425 19:55:30.590910   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 16/120
	I0425 19:55:31.592203   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 17/120
	I0425 19:55:32.594077   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 18/120
	I0425 19:55:33.595617   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 19/120
	I0425 19:55:34.597684   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 20/120
	I0425 19:55:35.599242   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 21/120
	I0425 19:55:36.600879   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 22/120
	I0425 19:55:37.603355   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 23/120
	I0425 19:55:38.604447   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 24/120
	I0425 19:55:39.605900   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 25/120
	I0425 19:55:40.606895   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 26/120
	I0425 19:55:41.608072   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 27/120
	I0425 19:55:42.609134   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 28/120
	I0425 19:55:43.610970   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 29/120
	I0425 19:55:44.612938   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 30/120
	I0425 19:55:45.614252   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 31/120
	I0425 19:55:46.615386   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 32/120
	I0425 19:55:47.616672   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 33/120
	I0425 19:55:48.617700   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 34/120
	I0425 19:55:49.619676   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 35/120
	I0425 19:55:50.620841   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 36/120
	I0425 19:55:51.621826   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 37/120
	I0425 19:55:52.623031   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 38/120
	I0425 19:55:53.624261   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 39/120
	I0425 19:55:54.626158   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 40/120
	I0425 19:55:55.627175   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 41/120
	I0425 19:55:56.628459   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 42/120
	I0425 19:55:57.629363   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 43/120
	I0425 19:55:58.630392   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 44/120
	I0425 19:55:59.632008   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 45/120
	I0425 19:56:00.632929   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 46/120
	I0425 19:56:01.634027   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 47/120
	I0425 19:56:02.635191   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 48/120
	I0425 19:56:03.636240   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 49/120
	I0425 19:56:04.638241   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 50/120
	I0425 19:56:05.639287   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 51/120
	I0425 19:56:06.640482   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 52/120
	I0425 19:56:07.641551   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 53/120
	I0425 19:56:08.642651   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 54/120
	I0425 19:56:09.644698   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 55/120
	I0425 19:56:10.645609   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 56/120
	I0425 19:56:11.646656   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 57/120
	I0425 19:56:12.648304   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 58/120
	I0425 19:56:13.649382   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 59/120
	I0425 19:56:14.651403   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 60/120
	I0425 19:56:15.652480   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 61/120
	I0425 19:56:16.653767   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 62/120
	I0425 19:56:17.655065   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 63/120
	I0425 19:56:18.656469   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 64/120
	I0425 19:56:19.658351   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 65/120
	I0425 19:56:20.660389   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 66/120
	I0425 19:56:21.661482   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 67/120
	I0425 19:56:22.662703   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 68/120
	I0425 19:56:23.664430   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 69/120
	I0425 19:56:24.666276   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 70/120
	I0425 19:56:25.667377   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 71/120
	I0425 19:56:26.669484   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 72/120
	I0425 19:56:27.670508   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 73/120
	I0425 19:56:28.672317   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 74/120
	I0425 19:56:29.674041   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 75/120
	I0425 19:56:30.675072   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 76/120
	I0425 19:56:31.676080   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 77/120
	I0425 19:56:32.677049   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 78/120
	I0425 19:56:33.678110   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 79/120
	I0425 19:56:34.680108   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 80/120
	I0425 19:56:35.681333   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 81/120
	I0425 19:56:36.682618   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 82/120
	I0425 19:56:37.684466   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 83/120
	I0425 19:56:38.685584   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 84/120
	I0425 19:56:39.687405   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 85/120
	I0425 19:56:40.688523   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 86/120
	I0425 19:56:41.689675   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 87/120
	I0425 19:56:42.690661   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 88/120
	I0425 19:56:43.692513   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 89/120
	I0425 19:56:44.694326   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 90/120
	I0425 19:56:45.695225   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 91/120
	I0425 19:56:46.696219   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 92/120
	I0425 19:56:47.697212   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 93/120
	I0425 19:56:48.698244   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 94/120
	I0425 19:56:49.699933   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 95/120
	I0425 19:56:50.700842   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 96/120
	I0425 19:56:51.702343   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 97/120
	I0425 19:56:52.704234   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 98/120
	I0425 19:56:53.705530   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 99/120
	I0425 19:56:54.707294   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 100/120
	I0425 19:56:55.708208   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 101/120
	I0425 19:56:56.709221   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 102/120
	I0425 19:56:57.710527   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 103/120
	I0425 19:56:58.712380   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 104/120
	I0425 19:56:59.713987   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 105/120
	I0425 19:57:00.715228   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 106/120
	I0425 19:57:01.716702   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 107/120
	I0425 19:57:02.718514   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 108/120
	I0425 19:57:03.719956   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 109/120
	I0425 19:57:04.722008   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 110/120
	I0425 19:57:05.723227   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 111/120
	I0425 19:57:06.724606   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 112/120
	I0425 19:57:07.726084   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 113/120
	I0425 19:57:08.727550   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 114/120
	I0425 19:57:09.729493   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 115/120
	I0425 19:57:10.730946   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 116/120
	I0425 19:57:11.732912   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 117/120
	I0425 19:57:12.734405   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 118/120
	I0425 19:57:13.735847   70878 main.go:141] libmachine: (embed-certs-512173) Waiting for machine to stop 119/120
	I0425 19:57:14.737082   70878 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0425 19:57:14.737139   70878 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0425 19:57:14.738985   70878 out.go:177] 
	W0425 19:57:14.740196   70878 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0425 19:57:14.740208   70878 out.go:239] * 
	* 
	W0425 19:57:14.742635   70878 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 19:57:14.743948   70878 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-512173 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512173 -n embed-certs-512173
E0425 19:57:29.410914   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512173 -n embed-certs-512173: exit status 3 (18.645961734s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:57:33.390448   71567 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	E0425 19:57:33.390463   71567 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-512173" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.19s)
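
Note on the Stop failures: all three Stop failures in this report (embed-certs, no-preload, default-k8s-diff-port) share the same shape. The stop command backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, asks libmachine to stop the guest, then polls the machine state once per second for 120 attempts; when the state never leaves "Running" it exits with GUEST_STOP_TIMEOUT (exit status 82), and the follow-up status probe fails with "no route to host". The sketch below only illustrates that polling pattern as it appears in the log; it is not minikube's actual implementation, and stopVM/vmState are hypothetical stubs standing in for the kvm2 driver calls.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopVM stands in for the hypervisor stop request (hypothetical stub).
func stopVM() error { return nil }

// vmState stands in for a hypervisor state query (hypothetical stub); it
// always reports "Running", which is what the failing runs observed.
func vmState() string { return "Running" }

// stopWithTimeout mirrors the pattern in the log: request a stop, then poll
// once per second up to the given number of attempts.
func stopWithTimeout(attempts int) error {
	if err := stopVM(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if vmState() != "Running" {
			return nil // stopped within the budget
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// 120 attempts at one per second matches the ~2m0s timeouts reported above.
	if err := stopWithTimeout(120); err != nil {
		fmt.Println("stop err:", err)
	}
}

Run as-is, the sketch simply burns through the 120-second budget and prints the same final error string seen in the log above.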

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-744552 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-744552 --alsologtostderr -v=3: exit status 82 (2m0.559071314s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-744552"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 19:55:37.288308   71080 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:55:37.288483   71080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:55:37.288500   71080 out.go:304] Setting ErrFile to fd 2...
	I0425 19:55:37.288510   71080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:55:37.288730   71080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:55:37.288979   71080 out.go:298] Setting JSON to false
	I0425 19:55:37.289074   71080 mustload.go:65] Loading cluster: no-preload-744552
	I0425 19:55:37.289421   71080 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:55:37.289507   71080 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/config.json ...
	I0425 19:55:37.289739   71080 mustload.go:65] Loading cluster: no-preload-744552
	I0425 19:55:37.289891   71080 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:55:37.289938   71080 stop.go:39] StopHost: no-preload-744552
	I0425 19:55:37.290886   71080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:55:37.290937   71080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:55:37.306187   71080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0425 19:55:37.306732   71080 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:55:37.307335   71080 main.go:141] libmachine: Using API Version  1
	I0425 19:55:37.307359   71080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:55:37.307780   71080 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:55:37.312803   71080 out.go:177] * Stopping node "no-preload-744552"  ...
	I0425 19:55:37.314037   71080 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0425 19:55:37.314077   71080 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 19:55:37.314347   71080 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0425 19:55:37.314399   71080 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 19:55:37.317502   71080 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 19:55:37.317927   71080 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 20:53:45 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 19:55:37.317973   71080 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 19:55:37.318180   71080 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 19:55:37.318362   71080 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 19:55:37.318560   71080 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 19:55:37.318727   71080 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 19:55:37.443143   71080 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0425 19:55:37.521393   71080 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0425 19:55:37.578847   71080 main.go:141] libmachine: Stopping "no-preload-744552"...
	I0425 19:55:37.578883   71080 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 19:55:37.580439   71080 main.go:141] libmachine: (no-preload-744552) Calling .Stop
	I0425 19:55:37.584234   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 0/120
	I0425 19:55:38.585664   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 1/120
	I0425 19:55:39.587069   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 2/120
	I0425 19:55:40.588311   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 3/120
	I0425 19:55:41.589851   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 4/120
	I0425 19:55:42.591889   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 5/120
	I0425 19:55:43.593376   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 6/120
	I0425 19:55:44.594981   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 7/120
	I0425 19:55:45.596499   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 8/120
	I0425 19:55:46.597819   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 9/120
	I0425 19:55:47.600068   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 10/120
	I0425 19:55:48.601563   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 11/120
	I0425 19:55:49.602776   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 12/120
	I0425 19:55:50.604766   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 13/120
	I0425 19:55:51.606105   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 14/120
	I0425 19:55:52.608261   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 15/120
	I0425 19:55:53.609630   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 16/120
	I0425 19:55:54.611092   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 17/120
	I0425 19:55:55.612726   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 18/120
	I0425 19:55:56.614048   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 19/120
	I0425 19:55:57.615703   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 20/120
	I0425 19:55:58.617145   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 21/120
	I0425 19:55:59.618597   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 22/120
	I0425 19:56:00.619941   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 23/120
	I0425 19:56:01.621262   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 24/120
	I0425 19:56:02.622861   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 25/120
	I0425 19:56:03.624333   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 26/120
	I0425 19:56:04.625670   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 27/120
	I0425 19:56:05.627202   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 28/120
	I0425 19:56:06.628573   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 29/120
	I0425 19:56:07.630391   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 30/120
	I0425 19:56:08.631644   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 31/120
	I0425 19:56:09.633061   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 32/120
	I0425 19:56:10.634391   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 33/120
	I0425 19:56:11.635805   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 34/120
	I0425 19:56:12.637887   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 35/120
	I0425 19:56:13.639256   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 36/120
	I0425 19:56:14.640618   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 37/120
	I0425 19:56:15.641944   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 38/120
	I0425 19:56:16.643431   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 39/120
	I0425 19:56:17.645510   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 40/120
	I0425 19:56:18.646937   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 41/120
	I0425 19:56:19.648377   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 42/120
	I0425 19:56:20.649749   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 43/120
	I0425 19:56:21.651112   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 44/120
	I0425 19:56:22.653021   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 45/120
	I0425 19:56:23.654349   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 46/120
	I0425 19:56:24.656577   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 47/120
	I0425 19:56:25.657976   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 48/120
	I0425 19:56:26.659506   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 49/120
	I0425 19:56:27.661559   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 50/120
	I0425 19:56:28.662881   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 51/120
	I0425 19:56:29.664184   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 52/120
	I0425 19:56:30.665752   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 53/120
	I0425 19:56:31.667109   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 54/120
	I0425 19:56:32.669038   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 55/120
	I0425 19:56:33.670407   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 56/120
	I0425 19:56:34.672833   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 57/120
	I0425 19:56:35.674498   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 58/120
	I0425 19:56:36.675854   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 59/120
	I0425 19:56:37.678134   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 60/120
	I0425 19:56:38.679592   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 61/120
	I0425 19:56:39.681210   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 62/120
	I0425 19:56:40.682730   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 63/120
	I0425 19:56:41.684882   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 64/120
	I0425 19:56:42.686787   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 65/120
	I0425 19:56:43.688044   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 66/120
	I0425 19:56:44.689458   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 67/120
	I0425 19:56:45.691012   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 68/120
	I0425 19:56:46.692383   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 69/120
	I0425 19:56:47.694603   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 70/120
	I0425 19:56:48.695897   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 71/120
	I0425 19:56:49.697276   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 72/120
	I0425 19:56:50.698590   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 73/120
	I0425 19:56:51.700003   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 74/120
	I0425 19:56:52.702147   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 75/120
	I0425 19:56:53.703341   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 76/120
	I0425 19:56:54.704796   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 77/120
	I0425 19:56:55.706228   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 78/120
	I0425 19:56:56.707538   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 79/120
	I0425 19:56:57.709986   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 80/120
	I0425 19:56:58.711422   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 81/120
	I0425 19:56:59.712975   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 82/120
	I0425 19:57:00.714603   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 83/120
	I0425 19:57:01.716383   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 84/120
	I0425 19:57:02.718347   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 85/120
	I0425 19:57:03.719827   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 86/120
	I0425 19:57:04.721365   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 87/120
	I0425 19:57:05.723049   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 88/120
	I0425 19:57:06.724413   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 89/120
	I0425 19:57:07.725776   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 90/120
	I0425 19:57:08.727358   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 91/120
	I0425 19:57:09.728949   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 92/120
	I0425 19:57:10.730590   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 93/120
	I0425 19:57:11.732249   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 94/120
	I0425 19:57:12.734301   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 95/120
	I0425 19:57:13.735670   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 96/120
	I0425 19:57:14.737380   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 97/120
	I0425 19:57:15.738969   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 98/120
	I0425 19:57:16.740579   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 99/120
	I0425 19:57:17.742915   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 100/120
	I0425 19:57:18.744330   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 101/120
	I0425 19:57:19.745773   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 102/120
	I0425 19:57:20.747231   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 103/120
	I0425 19:57:21.748830   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 104/120
	I0425 19:57:22.751052   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 105/120
	I0425 19:57:23.752602   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 106/120
	I0425 19:57:24.754071   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 107/120
	I0425 19:57:25.755575   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 108/120
	I0425 19:57:26.756917   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 109/120
	I0425 19:57:27.759070   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 110/120
	I0425 19:57:28.760381   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 111/120
	I0425 19:57:29.761788   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 112/120
	I0425 19:57:30.763162   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 113/120
	I0425 19:57:31.764520   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 114/120
	I0425 19:57:32.766616   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 115/120
	I0425 19:57:33.768703   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 116/120
	I0425 19:57:34.770186   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 117/120
	I0425 19:57:35.771633   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 118/120
	I0425 19:57:36.772962   71080 main.go:141] libmachine: (no-preload-744552) Waiting for machine to stop 119/120
	I0425 19:57:37.774105   71080 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0425 19:57:37.774156   71080 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0425 19:57:37.776295   71080 out.go:177] 
	W0425 19:57:37.777664   71080 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0425 19:57:37.777682   71080 out.go:239] * 
	* 
	W0425 19:57:37.780363   71080 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 19:57:37.782015   71080 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-744552 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744552 -n no-preload-744552
E0425 19:57:38.908779   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744552 -n no-preload-744552: exit status 3 (18.642750332s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:57:56.426558   71844 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.142:22: connect: no route to host
	E0425 19:57:56.426583   71844 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.142:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-744552" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.20s)
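
The post-mortem above checks host state with `status --format={{.Host}}`, a Go text/template rendered against minikube's status value; the lone "Error" in the stdout block is the output of that template. The snippet below is a minimal sketch of how such a template renders a field; the Status struct here is hypothetical and only stands in for whatever minikube actually passes to the template.

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for the value the --format template is
// rendered against; it is not minikube's actual status type.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// The same template string the post-mortem passes via --format.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))

	// A host that could not be reached over SSH ends up reported as "Error".
	s := Status{Host: "Error", Kubelet: "Stopped", APIServer: "Stopped"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}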

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-142196 --alsologtostderr -v=3
E0425 19:55:45.439391   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 19:55:53.562304   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 19:56:11.710036   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:11.715363   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:11.725637   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:11.745957   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:11.786329   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:11.866689   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:12.027066   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:12.347882   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:12.988892   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:14.269396   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:16.830325   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:16.988503   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:56:21.951225   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:32.191406   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:34.522755   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 19:56:48.448646   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:56:48.453896   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:56:48.464167   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:56:48.484409   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:56:48.524677   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:56:48.605078   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:56:48.765516   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:56:49.086153   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:56:49.727209   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:56:51.008386   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:56:52.671848   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:56:53.568801   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:56:58.689108   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:57:08.930014   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-142196 --alsologtostderr -v=3: exit status 82 (2m0.517255544s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-142196"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 19:55:44.751371   71181 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:55:44.751467   71181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:55:44.751472   71181 out.go:304] Setting ErrFile to fd 2...
	I0425 19:55:44.751477   71181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:55:44.751664   71181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:55:44.751866   71181 out.go:298] Setting JSON to false
	I0425 19:55:44.751939   71181 mustload.go:65] Loading cluster: default-k8s-diff-port-142196
	I0425 19:55:44.752281   71181 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:55:44.752359   71181 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/config.json ...
	I0425 19:55:44.752514   71181 mustload.go:65] Loading cluster: default-k8s-diff-port-142196
	I0425 19:55:44.752608   71181 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:55:44.752629   71181 stop.go:39] StopHost: default-k8s-diff-port-142196
	I0425 19:55:44.753008   71181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:55:44.753047   71181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:55:44.768282   71181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0425 19:55:44.768803   71181 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:55:44.769411   71181 main.go:141] libmachine: Using API Version  1
	I0425 19:55:44.769442   71181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:55:44.769876   71181 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:55:44.772635   71181 out.go:177] * Stopping node "default-k8s-diff-port-142196"  ...
	I0425 19:55:44.774013   71181 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0425 19:55:44.774035   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 19:55:44.774264   71181 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0425 19:55:44.774289   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 19:55:44.777314   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 19:55:44.777735   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 20:54:43 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 19:55:44.777768   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 19:55:44.777991   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 19:55:44.778184   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 19:55:44.778368   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 19:55:44.778508   71181 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 19:55:44.878292   71181 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0425 19:55:44.945123   71181 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0425 19:55:45.014245   71181 main.go:141] libmachine: Stopping "default-k8s-diff-port-142196"...
	I0425 19:55:45.014276   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 19:55:45.015977   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Stop
	I0425 19:55:45.019749   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 0/120
	I0425 19:55:46.021004   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 1/120
	I0425 19:55:47.022326   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 2/120
	I0425 19:55:48.023606   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 3/120
	I0425 19:55:49.024985   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 4/120
	I0425 19:55:50.027013   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 5/120
	I0425 19:55:51.028497   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 6/120
	I0425 19:55:52.029784   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 7/120
	I0425 19:55:53.031267   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 8/120
	I0425 19:55:54.032736   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 9/120
	I0425 19:55:55.034831   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 10/120
	I0425 19:55:56.036123   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 11/120
	I0425 19:55:57.037574   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 12/120
	I0425 19:55:58.038865   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 13/120
	I0425 19:55:59.040237   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 14/120
	I0425 19:56:00.042258   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 15/120
	I0425 19:56:01.043787   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 16/120
	I0425 19:56:02.045178   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 17/120
	I0425 19:56:03.046624   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 18/120
	I0425 19:56:04.047907   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 19/120
	I0425 19:56:05.050227   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 20/120
	I0425 19:56:06.051583   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 21/120
	I0425 19:56:07.053025   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 22/120
	I0425 19:56:08.054516   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 23/120
	I0425 19:56:09.055744   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 24/120
	I0425 19:56:10.057844   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 25/120
	I0425 19:56:11.059139   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 26/120
	I0425 19:56:12.060343   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 27/120
	I0425 19:56:13.061707   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 28/120
	I0425 19:56:14.062933   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 29/120
	I0425 19:56:15.065158   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 30/120
	I0425 19:56:16.066540   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 31/120
	I0425 19:56:17.067734   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 32/120
	I0425 19:56:18.069089   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 33/120
	I0425 19:56:19.070624   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 34/120
	I0425 19:56:20.072702   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 35/120
	I0425 19:56:21.074093   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 36/120
	I0425 19:56:22.075602   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 37/120
	I0425 19:56:23.077020   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 38/120
	I0425 19:56:24.078522   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 39/120
	I0425 19:56:25.080635   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 40/120
	I0425 19:56:26.082025   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 41/120
	I0425 19:56:27.083668   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 42/120
	I0425 19:56:28.084904   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 43/120
	I0425 19:56:29.086324   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 44/120
	I0425 19:56:30.088261   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 45/120
	I0425 19:56:31.089656   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 46/120
	I0425 19:56:32.091085   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 47/120
	I0425 19:56:33.092644   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 48/120
	I0425 19:56:34.094109   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 49/120
	I0425 19:56:35.096544   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 50/120
	I0425 19:56:36.097990   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 51/120
	I0425 19:56:37.099443   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 52/120
	I0425 19:56:38.100859   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 53/120
	I0425 19:56:39.102156   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 54/120
	I0425 19:56:40.104165   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 55/120
	I0425 19:56:41.105682   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 56/120
	I0425 19:56:42.107334   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 57/120
	I0425 19:56:43.108804   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 58/120
	I0425 19:56:44.110408   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 59/120
	I0425 19:56:45.112933   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 60/120
	I0425 19:56:46.114322   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 61/120
	I0425 19:56:47.115743   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 62/120
	I0425 19:56:48.117024   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 63/120
	I0425 19:56:49.118303   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 64/120
	I0425 19:56:50.120234   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 65/120
	I0425 19:56:51.121502   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 66/120
	I0425 19:56:52.122772   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 67/120
	I0425 19:56:53.124057   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 68/120
	I0425 19:56:54.125304   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 69/120
	I0425 19:56:55.127253   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 70/120
	I0425 19:56:56.128574   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 71/120
	I0425 19:56:57.129786   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 72/120
	I0425 19:56:58.131091   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 73/120
	I0425 19:56:59.132357   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 74/120
	I0425 19:57:00.134194   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 75/120
	I0425 19:57:01.135691   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 76/120
	I0425 19:57:02.137292   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 77/120
	I0425 19:57:03.138609   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 78/120
	I0425 19:57:04.140162   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 79/120
	I0425 19:57:05.142490   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 80/120
	I0425 19:57:06.143730   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 81/120
	I0425 19:57:07.145185   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 82/120
	I0425 19:57:08.146562   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 83/120
	I0425 19:57:09.147832   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 84/120
	I0425 19:57:10.150004   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 85/120
	I0425 19:57:11.151576   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 86/120
	I0425 19:57:12.152878   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 87/120
	I0425 19:57:13.154473   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 88/120
	I0425 19:57:14.155758   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 89/120
	I0425 19:57:15.157839   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 90/120
	I0425 19:57:16.159358   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 91/120
	I0425 19:57:17.160751   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 92/120
	I0425 19:57:18.162147   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 93/120
	I0425 19:57:19.163568   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 94/120
	I0425 19:57:20.165845   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 95/120
	I0425 19:57:21.167217   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 96/120
	I0425 19:57:22.168712   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 97/120
	I0425 19:57:23.170050   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 98/120
	I0425 19:57:24.171512   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 99/120
	I0425 19:57:25.173578   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 100/120
	I0425 19:57:26.175145   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 101/120
	I0425 19:57:27.176534   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 102/120
	I0425 19:57:28.177950   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 103/120
	I0425 19:57:29.179363   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 104/120
	I0425 19:57:30.181319   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 105/120
	I0425 19:57:31.182789   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 106/120
	I0425 19:57:32.184063   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 107/120
	I0425 19:57:33.185561   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 108/120
	I0425 19:57:34.187017   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 109/120
	I0425 19:57:35.189244   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 110/120
	I0425 19:57:36.190798   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 111/120
	I0425 19:57:37.192340   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 112/120
	I0425 19:57:38.193679   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 113/120
	I0425 19:57:39.194999   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 114/120
	I0425 19:57:40.197214   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 115/120
	I0425 19:57:41.198750   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 116/120
	I0425 19:57:42.200613   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 117/120
	I0425 19:57:43.201993   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 118/120
	I0425 19:57:44.203454   71181 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for machine to stop 119/120
	I0425 19:57:45.204699   71181 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0425 19:57:45.204748   71181 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0425 19:57:45.206687   71181 out.go:177] 
	W0425 19:57:45.208207   71181 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0425 19:57:45.208239   71181 out.go:239] * 
	* 
	W0425 19:57:45.211116   71181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 19:57:45.212417   71181 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-142196 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196: exit status 3 (18.635870907s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:58:03.850533   71935 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host
	E0425 19:58:03.850557   71935 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-142196" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)
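The GUEST_STOP_TIMEOUT above means the kvm2 driver polled the VM 120 times ("Waiting for machine to stop 119/120") without it ever leaving the "Running" state, so the first stop attempt gave up. As a hedged, illustrative follow-up only (virsh is not run by the test; the libvirt domain name matches the profile name, as the libvirt debug messages elsewhere in this report show), one could inspect and force off the stuck domain by hand:

	virsh list --all                               # confirm default-k8s-diff-port-142196 is still listed as running
	virsh destroy default-k8s-diff-port-142196     # hard power-off of the stuck domain
	virsh dominfo default-k8s-diff-port-142196     # the state should now read "shut off"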

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-210442 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-210442 create -f testdata/busybox.yaml: exit status 1 (44.034353ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-210442" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-210442 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 6 (236.840719ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:57:33.392913   71690 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-210442" does not appear in /home/jenkins/minikube-integration/18757-6355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-210442" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442
E0425 19:57:33.632038   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 6 (236.408289ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:57:33.631899   71746 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-210442" does not appear in /home/jenkins/minikube-integration/18757-6355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-210442" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)
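Both status probes above point at the same root cause: the "old-k8s-version-210442" entry is missing from /home/jenkins/minikube-integration/18757-6355/kubeconfig, so every kubectl --context old-k8s-version-210442 invocation fails before it ever reaches the cluster. A hedged, illustrative fix-up (this is the command the warning itself recommends, not something the test runs):

	out/minikube-linux-amd64 update-context -p old-k8s-version-210442
	kubectl config get-contexts                    # the profile should now appear as a context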

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512173 -n embed-certs-512173
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512173 -n embed-certs-512173: exit status 3 (3.19563841s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:57:36.586557   71735 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	E0425 19:57:36.586578   71735 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-512173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-512173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153199013s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-512173 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512173 -n embed-certs-512173
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512173 -n embed-certs-512173: exit status 3 (3.063370649s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:57:45.802707   71889 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	E0425 19:57:45.802729   71889 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-512173" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.41s)
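The assertion at start_stop_delete_test.go:241 only checks that the host state reads "Stopped" once the stop step has finished; here every probe hits "no route to host" on the VM's SSH port and the state comes back as "Error" instead. A hedged re-run of that check by hand, using the exact command already shown above (the expected/observed comments are added for illustration):

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512173
	# expected after a clean stop: Stopped
	# observed in this run:       Error (dial tcp 192.168.50.7:22: connect: no route to host)

The no-preload-744552 and default-k8s-diff-port-142196 EnableAddonAfterStop failures below trip the same assertion for the same reason.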

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (101.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-210442 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-210442 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m40.865920762s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-210442 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-210442 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-210442 describe deploy/metrics-server -n kube-system: exit status 1 (43.647922ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-210442" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-210442 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 6 (237.115827ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:59:14.777266   72599 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-210442" does not appear in /home/jenkins/minikube-integration/18757-6355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-210442" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (101.15s)
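The stderr above shows where the addon enable actually breaks: minikube shells into the VM and applies the metrics-server manifests with the cluster's own kubectl, which cannot reach the API server on localhost:8443. A hedged way to confirm the API server is unreachable (illustrative only; the ssh invocation style matches the other commands in this report, and the binary/kubeconfig paths are copied from the failing callback above):

	out/minikube-linux-amd64 -p old-k8s-version-210442 ssh "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz"
	# in this run the same "connection to the server localhost:8443 was refused" error would be expected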

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744552 -n no-preload-744552
E0425 19:57:56.443113   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 19:57:57.270105   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:57:57.275326   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:57:57.285626   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:57:57.305944   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:57:57.346246   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:57:57.426493   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:57:57.587074   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:57:57.907729   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:57:58.548418   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744552 -n no-preload-744552: exit status 3 (3.167714414s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:57:59.594595   72046 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.142:22: connect: no route to host
	E0425 19:57:59.594616   72046 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.142:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-744552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0425 19:57:59.829190   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:58:02.389494   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-744552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15231005s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.142:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-744552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744552 -n no-preload-744552
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744552 -n no-preload-744552: exit status 3 (3.063118752s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:58:08.810563   72156 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.142:22: connect: no route to host
	E0425 19:58:08.810582   72156 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.142:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-744552" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196: exit status 3 (3.168208456s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:58:07.018530   72126 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host
	E0425 19:58:07.018554   72126 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-142196 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0425 19:58:07.510333   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-142196 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152257518s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-142196 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196: exit status 3 (3.06319998s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0425 19:58:16.234550   72256 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host
	E0425 19:58:16.234582   72256 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.123:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-142196" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (749.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-210442 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0425 19:59:19.192562   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:59:32.292261   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 19:59:43.282135   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:59:49.505887   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:59:55.065600   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:59:59.378716   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 20:00:12.602412   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 20:00:22.749124   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 20:00:40.284483   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 20:00:41.112986   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 20:00:45.439069   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 20:01:05.202892   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 20:01:11.426455   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 20:01:11.709956   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 20:01:39.393296   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 20:01:48.449681   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 20:02:16.133006   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
E0425 20:02:57.270126   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 20:03:21.359291   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 20:03:24.953899   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 20:03:27.582935   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 20:03:36.328009   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 20:03:49.043984   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 20:03:55.267638   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 20:04:55.065275   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 20:05:12.603430   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 20:05:45.438404   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 20:06:11.710741   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 20:06:48.448850   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-210442 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m25.634868475s)

                                                
                                                
-- stdout --
	* [old-k8s-version-210442] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18757
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-210442" primary control-plane node in "old-k8s-version-210442" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-210442" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 19:59:17.353932   72712 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:59:17.354045   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354055   72712 out.go:304] Setting ErrFile to fd 2...
	I0425 19:59:17.354059   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354269   72712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:59:17.354795   72712 out.go:298] Setting JSON to false
	I0425 19:59:17.355681   72712 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6103,"bootTime":1714069054,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:59:17.355740   72712 start.go:139] virtualization: kvm guest
	I0425 19:59:17.357921   72712 out.go:177] * [old-k8s-version-210442] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:59:17.359325   72712 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:59:17.360640   72712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:59:17.359305   72712 notify.go:220] Checking for updates...
	I0425 19:59:17.361801   72712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:59:17.363086   72712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:59:17.364512   72712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:59:17.365842   72712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:59:17.367508   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 19:59:17.367909   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.367946   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.382995   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0425 19:59:17.383362   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.383991   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.384016   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.384378   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.384566   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.386317   72712 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0425 19:59:17.387599   72712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:59:17.387904   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.387948   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.402999   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0425 19:59:17.403506   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.403962   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.403986   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.404318   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.404472   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.438308   72712 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:59:17.439686   72712 start.go:297] selected driver: kvm2
	I0425 19:59:17.439716   72712 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.439831   72712 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:59:17.440486   72712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.440553   72712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:59:17.454719   72712 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:59:17.455114   72712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:59:17.455184   72712 cni.go:84] Creating CNI manager for ""
	I0425 19:59:17.455203   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:59:17.455266   72712 start.go:340] cluster config:
	{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.455393   72712 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.457210   72712 out.go:177] * Starting "old-k8s-version-210442" primary control-plane node in "old-k8s-version-210442" cluster
	I0425 19:59:17.458384   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 19:59:17.458418   72712 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:59:17.458430   72712 cache.go:56] Caching tarball of preloaded images
	I0425 19:59:17.458517   72712 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:59:17.458529   72712 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0425 19:59:17.458638   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 19:59:17.458844   72712 start.go:360] acquireMachinesLock for old-k8s-version-210442: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 20:03:11.255486   72712 start.go:364] duration metric: took 3m53.796595105s to acquireMachinesLock for "old-k8s-version-210442"
	I0425 20:03:11.255550   72712 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:11.255569   72712 fix.go:54] fixHost starting: 
	I0425 20:03:11.256083   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:11.256128   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:11.272950   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0425 20:03:11.273365   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:11.273878   72712 main.go:141] libmachine: Using API Version  1
	I0425 20:03:11.273907   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:11.274277   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:11.274487   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:11.274666   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetState
	I0425 20:03:11.276420   72712 fix.go:112] recreateIfNeeded on old-k8s-version-210442: state=Stopped err=<nil>
	I0425 20:03:11.276454   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	W0425 20:03:11.276608   72712 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:11.279156   72712 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210442" ...
	I0425 20:03:11.280701   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .Start
	I0425 20:03:11.280895   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring networks are active...
	I0425 20:03:11.281729   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network default is active
	I0425 20:03:11.282158   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network mk-old-k8s-version-210442 is active
	I0425 20:03:11.282639   72712 main.go:141] libmachine: (old-k8s-version-210442) Getting domain xml...
	I0425 20:03:11.283399   72712 main.go:141] libmachine: (old-k8s-version-210442) Creating domain...
	I0425 20:03:12.659136   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting to get IP...
	I0425 20:03:12.660227   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.660770   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.660843   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.660724   73691 retry.go:31] will retry after 234.96602ms: waiting for machine to come up
	I0425 20:03:12.897395   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.897966   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.897993   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.897913   73691 retry.go:31] will retry after 387.692223ms: waiting for machine to come up
	I0425 20:03:13.287742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.288414   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.288443   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.288397   73691 retry.go:31] will retry after 461.897892ms: waiting for machine to come up
	I0425 20:03:13.752061   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.752574   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.752603   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.752513   73691 retry.go:31] will retry after 452.347315ms: waiting for machine to come up
	I0425 20:03:14.206275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.206684   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.206708   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.206629   73691 retry.go:31] will retry after 466.12355ms: waiting for machine to come up
	I0425 20:03:14.674265   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.674788   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.674818   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.674735   73691 retry.go:31] will retry after 697.70071ms: waiting for machine to come up
	I0425 20:03:15.373862   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:15.374297   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:15.374325   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:15.374252   73691 retry.go:31] will retry after 835.73273ms: waiting for machine to come up
	I0425 20:03:16.211394   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:16.211870   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:16.211902   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:16.211815   73691 retry.go:31] will retry after 1.26739043s: waiting for machine to come up
	I0425 20:03:17.480654   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:17.481045   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:17.481094   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:17.481007   73691 retry.go:31] will retry after 1.238487953s: waiting for machine to come up
	I0425 20:03:18.720512   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:18.720940   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:18.720965   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:18.720902   73691 retry.go:31] will retry after 2.277078909s: waiting for machine to come up
	I0425 20:03:20.999749   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:21.000275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:21.000305   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:21.000223   73691 retry.go:31] will retry after 2.81059851s: waiting for machine to come up
	I0425 20:03:23.812963   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:23.813457   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:23.813476   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:23.813429   73691 retry.go:31] will retry after 2.508562986s: waiting for machine to come up
	I0425 20:03:26.323267   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:26.323733   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:26.323761   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:26.323699   73691 retry.go:31] will retry after 4.475703543s: waiting for machine to come up
	I0425 20:03:30.803467   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804014   72712 main.go:141] libmachine: (old-k8s-version-210442) Found IP for machine: 192.168.61.136
	I0425 20:03:30.804041   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserving static IP address...
	I0425 20:03:30.804057   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has current primary IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804495   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.804535   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | skip adding static IP to network mk-old-k8s-version-210442 - found existing host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"}
	I0425 20:03:30.804562   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserved static IP address: 192.168.61.136
	I0425 20:03:30.804582   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting for SSH to be available...
	I0425 20:03:30.804599   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Getting to WaitForSSH function...
	I0425 20:03:30.807110   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807533   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.807556   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807706   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH client type: external
	I0425 20:03:30.807725   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa (-rw-------)
	I0425 20:03:30.807767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:30.807783   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | About to run SSH command:
	I0425 20:03:30.807815   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | exit 0
	I0425 20:03:30.935091   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:30.935445   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetConfigRaw
	I0425 20:03:30.936168   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:30.938767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939193   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.939246   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939428   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 20:03:30.939630   72712 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:30.939649   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:30.939870   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:30.942320   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.942771   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942923   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:30.943113   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943306   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943468   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:30.943640   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:30.943842   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:30.943854   72712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:31.052598   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:31.052625   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.052821   72712 buildroot.go:166] provisioning hostname "old-k8s-version-210442"
	I0425 20:03:31.052844   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.053080   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.056324   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056713   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.056745   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056885   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.057056   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057190   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057375   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.057549   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.057724   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.057742   72712 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210442 && echo "old-k8s-version-210442" | sudo tee /etc/hostname
	I0425 20:03:31.188461   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210442
	
	I0425 20:03:31.188494   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.191628   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192088   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.192117   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192332   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.192519   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192655   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192767   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.192944   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.193142   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.193167   72712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:31.317374   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:31.317402   72712 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:31.317436   72712 buildroot.go:174] setting up certificates
	I0425 20:03:31.317447   72712 provision.go:84] configureAuth start
	I0425 20:03:31.317461   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.317778   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:31.321012   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321388   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.321421   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321698   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.323976   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324326   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.324354   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324523   72712 provision.go:143] copyHostCerts
	I0425 20:03:31.324573   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:31.324584   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:31.324656   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:31.324764   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:31.324778   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:31.324807   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:31.324879   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:31.324890   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:31.324915   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:31.324978   72712 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210442 san=[127.0.0.1 192.168.61.136 localhost minikube old-k8s-version-210442]
	I0425 20:03:31.410674   72712 provision.go:177] copyRemoteCerts
	I0425 20:03:31.410728   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:31.410755   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.413170   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413449   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.413491   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413634   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.413832   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.413988   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.414156   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:31.502759   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:31.536662   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0425 20:03:31.565106   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:31.593254   72712 provision.go:87] duration metric: took 275.793443ms to configureAuth
	I0425 20:03:31.593287   72712 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:31.593621   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 20:03:31.593720   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.596515   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.596827   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.596859   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.597057   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.597287   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597448   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597624   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.597775   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.597927   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.597942   72712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:31.925149   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:31.925182   72712 machine.go:97] duration metric: took 985.540626ms to provisionDockerMachine
	I0425 20:03:31.925199   72712 start.go:293] postStartSetup for "old-k8s-version-210442" (driver="kvm2")
	I0425 20:03:31.925211   72712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:31.925258   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:31.925560   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:31.925596   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.928532   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.928982   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.929013   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.929232   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.929458   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.929637   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.929787   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.023009   72712 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:32.029391   72712 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:32.029426   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:32.029508   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:32.029576   72712 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:32.029664   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:32.046596   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:32.077323   72712 start.go:296] duration metric: took 152.112632ms for postStartSetup
	I0425 20:03:32.077396   72712 fix.go:56] duration metric: took 20.821829703s for fixHost
	I0425 20:03:32.077425   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.080136   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080477   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.080526   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080636   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.080836   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081067   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081283   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.081493   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:32.081695   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:32.081711   72712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 20:03:32.187617   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075412.163072845
	
	I0425 20:03:32.187642   72712 fix.go:216] guest clock: 1714075412.163072845
	I0425 20:03:32.187652   72712 fix.go:229] Guest: 2024-04-25 20:03:32.163072845 +0000 UTC Remote: 2024-04-25 20:03:32.07740605 +0000 UTC m=+254.767943919 (delta=85.666795ms)
	I0425 20:03:32.187675   72712 fix.go:200] guest clock delta is within tolerance: 85.666795ms
	I0425 20:03:32.187682   72712 start.go:83] releasing machines lock for "old-k8s-version-210442", held for 20.932154384s
	I0425 20:03:32.187709   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.187998   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:32.190538   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.190898   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.190932   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.191077   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191817   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191996   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.192076   72712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:32.192116   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.192208   72712 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:32.192230   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.194821   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.194988   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195191   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195212   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195334   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195368   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195500   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195673   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195677   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195847   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195866   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196063   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.196083   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196219   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.276462   72712 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:32.300979   72712 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:32.458446   72712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:32.465434   72712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:32.465518   72712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:32.486929   72712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:32.486954   72712 start.go:494] detecting cgroup driver to use...
	I0425 20:03:32.487019   72712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:32.509425   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:32.530999   72712 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:32.531059   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:32.547280   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:32.563594   72712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:32.699207   72712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:32.875013   72712 docker.go:233] disabling docker service ...
	I0425 20:03:32.875096   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:32.897149   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:32.916105   72712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:33.071143   72712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:33.231529   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:33.252919   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:33.277388   72712 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0425 20:03:33.277457   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.290889   72712 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:33.290953   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.305488   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.319263   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.332961   72712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:33.354086   72712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:33.373431   72712 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:33.373517   72712 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:33.398458   72712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:33.418683   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:33.595555   72712 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:33.808015   72712 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:33.810391   72712 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:33.817593   72712 start.go:562] Will wait 60s for crictl version
	I0425 20:03:33.817646   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:33.823381   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:33.866310   72712 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:33.866411   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.905561   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.952764   72712 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0425 20:03:33.954161   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:33.957316   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.957778   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:33.957811   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.958080   72712 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:33.964467   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:33.984277   72712 kubeadm.go:877] updating cluster {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:33.984437   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 20:03:33.984499   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:34.049402   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:34.049479   72712 ssh_runner.go:195] Run: which lz4
	I0425 20:03:34.055519   72712 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 20:03:34.061481   72712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:34.061522   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0425 20:03:36.271646   72712 crio.go:462] duration metric: took 2.216165414s to copy over tarball
	I0425 20:03:36.271722   72712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:03:39.894331   72712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.622580176s)
	I0425 20:03:39.894364   72712 crio.go:469] duration metric: took 3.62268463s to extract the tarball
	I0425 20:03:39.894373   72712 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:39.965071   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:40.009534   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:40.009561   72712 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:03:40.009629   72712 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.009651   72712 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.009677   72712 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.009662   72712 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.009794   72712 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.009920   72712 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.010033   72712 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.010241   72712 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0425 20:03:40.011305   72712 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.011334   72712 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.011346   72712 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.011686   72712 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.012422   72712 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.012429   72712 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.012437   72712 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0425 20:03:40.012546   72712 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.143545   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.155203   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.157842   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.158081   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.161210   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.166515   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.181859   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0425 20:03:40.301699   72712 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0425 20:03:40.301759   72712 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.301805   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.379386   72712 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0425 20:03:40.379445   72712 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.379490   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406119   72712 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0425 20:03:40.406231   72712 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.406174   72712 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0425 20:03:40.406338   72712 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.406365   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406389   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420450   72712 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0425 20:03:40.420495   72712 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.420548   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420461   72712 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0425 20:03:40.420629   72712 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.420677   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430055   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.430110   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.430232   72712 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0425 20:03:40.430263   72712 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0425 20:03:40.430274   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.430277   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.430303   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430326   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.430389   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.582980   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0425 20:03:40.583094   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0425 20:03:40.587500   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0425 20:03:40.587564   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0425 20:03:40.587579   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0425 20:03:40.587650   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0425 20:03:40.587697   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0425 20:03:40.625942   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0425 20:03:40.941957   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:41.096086   72712 cache_images.go:92] duration metric: took 1.086507707s to LoadCachedImages
	W0425 20:03:41.096249   72712 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0425 20:03:41.096279   72712 kubeadm.go:928] updating node { 192.168.61.136 8443 v1.20.0 crio true true} ...
	I0425 20:03:41.096415   72712 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210442 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:41.096509   72712 ssh_runner.go:195] Run: crio config
	I0425 20:03:41.169311   72712 cni.go:84] Creating CNI manager for ""
	I0425 20:03:41.169341   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:41.169357   72712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:41.169397   72712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210442 NodeName:old-k8s-version-210442 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0425 20:03:41.169570   72712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210442"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:41.169639   72712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0425 20:03:41.182191   72712 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:41.182283   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:41.193546   72712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0425 20:03:41.218220   72712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:41.238647   72712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0425 20:03:41.259040   72712 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:41.263603   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:41.278007   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:41.425587   72712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:41.450990   72712 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442 for IP: 192.168.61.136
	I0425 20:03:41.451013   72712 certs.go:194] generating shared ca certs ...
	I0425 20:03:41.451034   72712 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:41.451225   72712 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:41.451307   72712 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:41.451323   72712 certs.go:256] generating profile certs ...
	I0425 20:03:41.451449   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.key
	I0425 20:03:41.451528   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key.1533c9ac
	I0425 20:03:41.451587   72712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key
	I0425 20:03:41.451789   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:41.451860   72712 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:41.451880   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:41.451915   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:41.451945   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:41.451968   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:41.452023   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:41.452870   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:41.510467   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:41.555595   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:41.606059   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:41.648206   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0425 20:03:41.690090   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:41.727674   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:41.766537   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:41.799524   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:41.828668   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:41.860964   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:41.890272   72712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:41.911787   72712 ssh_runner.go:195] Run: openssl version
	I0425 20:03:41.918926   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:41.933194   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.938995   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.939060   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.945934   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:41.959859   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:41.974906   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.980931   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.981006   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.987789   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:42.002455   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:42.016797   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023789   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023853   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.033189   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:42.047467   72712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:42.053552   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:42.063130   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:42.070290   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:42.079527   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:42.087983   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:42.096658   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
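The openssl x509 -checkend 86400 invocations above exit non-zero when a certificate expires within the next 24 hours, which is how the restart path decides whether the existing control-plane certs can be reused. A minimal Go sketch of the same check (illustrative only, not minikube's implementation; the path below is just one of the certs named in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within the given window, the same question answered by
    // "openssl x509 -checkend 86400" in the log above.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        // Illustrative path; the log checks several certs under /var/lib/minikube/certs.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }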
	I0425 20:03:42.103477   72712 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:42.103596   72712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:42.103649   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.155980   72712 cri.go:89] found id: ""
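Each of these CRI listings shells out to crictl; an empty result (found id: "") means no matching containers exist yet on the node. A rough, self-contained Go sketch of such a listing, using the same command line that appears in the log (illustrative, not minikube's actual cri.go code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemIDs returns the IDs of all containers, running or not,
    // labelled with the kube-system namespace, via the crictl flags shown above.
    func listKubeSystemIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        // crictl --quiet prints one container ID per line; an empty output
        // yields an empty slice, matching the `found id: ""` lines in the log.
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        ids, err := listKubeSystemIDs()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
    }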
	I0425 20:03:42.156085   72712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:42.172499   72712 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:42.172525   72712 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:42.172532   72712 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:42.172580   72712 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:42.187864   72712 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:42.188948   72712 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210442" does not appear in /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:42.189659   72712 kubeconfig.go:62] /home/jenkins/minikube-integration/18757-6355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210442" cluster setting kubeconfig missing "old-k8s-version-210442" context setting]
	I0425 20:03:42.190635   72712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:42.192402   72712 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:42.207284   72712 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.136
	I0425 20:03:42.207318   72712 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:42.207329   72712 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:42.207403   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.251184   72712 cri.go:89] found id: ""
	I0425 20:03:42.251257   72712 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:42.271727   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:42.289161   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:42.289184   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:42.289237   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:42.302492   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:42.302588   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:42.317790   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:42.329940   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:42.330002   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:42.342772   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:42.356480   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:42.357280   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:42.370403   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:42.384245   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:42.384332   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
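The block above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that lacks it; here every file is already absent, so each grep fails and the rm -f is effectively a no-op. A hedged Go sketch of that cleanup step (an approximation, not the kubeadm.go source):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeIfStale deletes a kubeconfig that does not point at the expected
    // control-plane endpoint, mirroring the grep / rm -f sequence in the log.
    func removeIfStale(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if os.IsNotExist(err) {
            return nil // already absent, nothing to clean up
        }
        if err != nil {
            return err
        }
        if !strings.Contains(string(data), endpoint) {
            return os.Remove(path)
        }
        return nil
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            if err := removeIfStale("/etc/kubernetes/"+f, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, f+":", err)
            }
        }
    }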
	I0425 20:03:42.398271   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:42.412361   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:42.575076   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.186458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.480114   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.594128   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.707129   72712 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:43.707221   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.207406   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.707733   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.208100   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.708041   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.207966   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.707255   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:47.207754   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:47.707730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.208213   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.707685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.207879   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.707914   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.208278   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.707691   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.207600   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.707365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:52.207931   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:52.707459   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.208241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.707431   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.207538   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.707289   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.207319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.707625   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.207562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.708324   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:57.207348   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:57.707868   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.208319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.207410   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.707562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.208006   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.708245   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.208178   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.707239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:02.207926   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:02.707796   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.207913   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.708267   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.207491   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.707894   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.207346   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.707801   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.208283   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.707342   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:07.208190   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:07.707466   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.207370   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.707951   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.207604   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.708057   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.207422   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.707391   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.207510   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.707828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:12.207519   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:12.707816   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.207561   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.708264   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.207822   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.707509   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.207507   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.707899   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.208254   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.708246   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:17.207508   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:17.707948   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.207953   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.707659   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.207609   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.707567   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.207989   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.707938   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.208305   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.707827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:22.207940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:22.707381   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.207532   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.707461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.208239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.707742   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.208365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.707323   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.207485   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.707727   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:27.208332   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:27.707275   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.207776   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.708096   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.207685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.708249   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.207647   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.707943   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.207471   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.707902   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:32.207582   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:32.708066   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.208090   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.707474   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.207664   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.708110   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.208160   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.707940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.207505   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.708334   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:37.207939   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:37.707256   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.207621   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.708237   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.208327   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.707542   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.207371   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.708300   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.207577   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.708097   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:42.207684   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:42.708257   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.207407   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.707548   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:43.707618   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:43.753656   72712 cri.go:89] found id: ""
	I0425 20:04:43.753686   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.753698   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:43.753706   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:43.753770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:43.797957   72712 cri.go:89] found id: ""
	I0425 20:04:43.797982   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.797991   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:43.797996   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:43.798051   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:43.836700   72712 cri.go:89] found id: ""
	I0425 20:04:43.836729   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.836737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:43.836742   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:43.836795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:43.883452   72712 cri.go:89] found id: ""
	I0425 20:04:43.883478   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.883486   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:43.883492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:43.883544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:43.929975   72712 cri.go:89] found id: ""
	I0425 20:04:43.930004   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.930014   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:43.930022   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:43.930089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:43.967648   72712 cri.go:89] found id: ""
	I0425 20:04:43.967681   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.967693   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:43.967701   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:43.967758   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:44.011024   72712 cri.go:89] found id: ""
	I0425 20:04:44.011048   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.011072   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:44.011078   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:44.011129   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:44.050233   72712 cri.go:89] found id: ""
	I0425 20:04:44.050263   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.050274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:44.050286   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:44.050302   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:44.196275   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:44.196307   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:44.196323   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:44.260707   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:44.260748   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:44.306051   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:44.306090   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:44.357643   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:44.357682   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
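The long run of pgrep -xnf kube-apiserver.*minikube.* lines above is a poll loop: roughly every 500ms the wait code checks whether a kube-apiserver process has appeared, and after about a minute without one it pauses to collect kubelet, dmesg, CRI-O and container-status diagnostics (the "Gathering logs for ..." lines) before polling again. A simplified sketch of that polling pattern, with assumed timings (not the api_server.go source):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a kube-apiserver process until it shows up
    // or the deadline passes, mirroring the repeated pgrep calls in the log.
    func waitForAPIServer(timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }

    func main() {
        if waitForAPIServer(60 * time.Second) {
            fmt.Println("kube-apiserver process appeared")
            return
        }
        fmt.Println("timed out waiting for kube-apiserver; collect diagnostics and retry")
    }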
	I0425 20:04:46.875982   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:46.890987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:46.891062   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:46.935855   72712 cri.go:89] found id: ""
	I0425 20:04:46.935878   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.935885   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:46.935891   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:46.935948   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:46.978634   72712 cri.go:89] found id: ""
	I0425 20:04:46.978662   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.978674   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:46.978681   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:46.978749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:47.019845   72712 cri.go:89] found id: ""
	I0425 20:04:47.019864   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.019872   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:47.019877   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:47.019933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:47.065002   72712 cri.go:89] found id: ""
	I0425 20:04:47.065040   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.065064   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:47.065072   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:47.065139   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:47.106370   72712 cri.go:89] found id: ""
	I0425 20:04:47.106404   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.106416   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:47.106423   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:47.106483   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:47.143851   72712 cri.go:89] found id: ""
	I0425 20:04:47.143874   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.143883   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:47.143888   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:47.143932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:47.186130   72712 cri.go:89] found id: ""
	I0425 20:04:47.186160   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.186168   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:47.186174   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:47.186238   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:47.228959   72712 cri.go:89] found id: ""
	I0425 20:04:47.228984   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.228992   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:47.229000   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:47.229010   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:47.299852   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:47.299893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:47.346078   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:47.346111   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:47.405897   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:47.405932   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:47.424426   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:47.424455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:47.506603   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.007697   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:50.023258   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:50.023333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:50.066794   72712 cri.go:89] found id: ""
	I0425 20:04:50.066827   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.066836   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:50.066842   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:50.066913   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:50.109167   72712 cri.go:89] found id: ""
	I0425 20:04:50.109200   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.109212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:50.109219   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:50.109306   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:50.151854   72712 cri.go:89] found id: ""
	I0425 20:04:50.151878   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.151886   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:50.151892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:50.151940   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:50.190600   72712 cri.go:89] found id: ""
	I0425 20:04:50.190632   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.190644   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:50.190672   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:50.190742   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:50.232851   72712 cri.go:89] found id: ""
	I0425 20:04:50.232874   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.232883   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:50.232889   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:50.232935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:50.274941   72712 cri.go:89] found id: ""
	I0425 20:04:50.274971   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.274983   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:50.274990   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:50.275069   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:50.320954   72712 cri.go:89] found id: ""
	I0425 20:04:50.320981   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.320992   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:50.320999   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:50.321068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:50.361799   72712 cri.go:89] found id: ""
	I0425 20:04:50.361829   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.361839   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:50.361847   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:50.361858   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:50.457792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.457819   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:50.457834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:50.539653   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:50.539702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:50.598740   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:50.598774   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:50.650501   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:50.650533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:53.167827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:53.183324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:53.183403   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:53.227598   72712 cri.go:89] found id: ""
	I0425 20:04:53.227641   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.227650   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:53.227655   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:53.227700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:53.271170   72712 cri.go:89] found id: ""
	I0425 20:04:53.271200   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.271212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:53.271220   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:53.271304   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:53.318185   72712 cri.go:89] found id: ""
	I0425 20:04:53.318233   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.318246   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:53.318255   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:53.318324   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:53.372199   72712 cri.go:89] found id: ""
	I0425 20:04:53.372228   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.372238   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:53.372244   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:53.372367   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:53.414048   72712 cri.go:89] found id: ""
	I0425 20:04:53.414080   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.414091   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:53.414099   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:53.414170   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:53.455746   72712 cri.go:89] found id: ""
	I0425 20:04:53.455806   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.455819   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:53.455827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:53.455901   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:53.497969   72712 cri.go:89] found id: ""
	I0425 20:04:53.497996   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.498004   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:53.498011   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:53.498057   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:53.543642   72712 cri.go:89] found id: ""
	I0425 20:04:53.543668   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.543675   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:53.543684   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:53.543693   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:53.596106   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:53.596144   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:53.612755   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:53.612787   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:53.693068   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:53.693089   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:53.693102   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:53.771499   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:53.771535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:56.322663   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:56.336866   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:56.336945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:56.375515   72712 cri.go:89] found id: ""
	I0425 20:04:56.375556   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.375567   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:56.375574   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:56.375641   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:56.423230   72712 cri.go:89] found id: ""
	I0425 20:04:56.423261   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.423273   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:56.423281   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:56.423366   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:56.467786   72712 cri.go:89] found id: ""
	I0425 20:04:56.467814   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.467835   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:56.467842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:56.467895   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:56.517671   72712 cri.go:89] found id: ""
	I0425 20:04:56.517696   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.517708   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:56.517715   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:56.517770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:56.558622   72712 cri.go:89] found id: ""
	I0425 20:04:56.558651   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.558662   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:56.558669   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:56.558746   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:56.601350   72712 cri.go:89] found id: ""
	I0425 20:04:56.601374   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.601382   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:56.601387   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:56.601444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:56.645892   72712 cri.go:89] found id: ""
	I0425 20:04:56.645923   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.645934   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:56.645940   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:56.646001   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:56.691619   72712 cri.go:89] found id: ""
	I0425 20:04:56.691645   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.691656   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:56.691665   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:56.691679   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:56.744854   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:56.744891   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:56.762523   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:56.762556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:56.843396   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:56.843422   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:56.843437   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:56.933785   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:56.933825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:59.481512   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:59.497510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:59.497588   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:59.547382   72712 cri.go:89] found id: ""
	I0425 20:04:59.547412   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.547423   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:59.547432   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:59.547486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:59.597671   72712 cri.go:89] found id: ""
	I0425 20:04:59.597699   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.597711   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:59.597717   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:59.597762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:59.641455   72712 cri.go:89] found id: ""
	I0425 20:04:59.641486   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.641497   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:59.641503   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:59.641613   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:59.685052   72712 cri.go:89] found id: ""
	I0425 20:04:59.685092   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.685104   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:59.685112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:59.685173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:59.735912   72712 cri.go:89] found id: ""
	I0425 20:04:59.735943   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.735951   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:59.735957   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:59.736025   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:59.799294   72712 cri.go:89] found id: ""
	I0425 20:04:59.799322   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.799332   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:59.799338   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:59.799395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:59.871270   72712 cri.go:89] found id: ""
	I0425 20:04:59.871297   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.871308   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:59.871315   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:59.871377   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:59.919001   72712 cri.go:89] found id: ""
	I0425 20:04:59.919091   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.919110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:59.919120   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:59.919135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:59.973458   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:59.973498   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:59.989729   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:59.989757   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:00.072887   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:00.072911   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:00.072926   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:00.153886   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:00.153921   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:02.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:02.722771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:02.722831   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:02.770101   72712 cri.go:89] found id: ""
	I0425 20:05:02.770134   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.770147   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:02.770154   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:02.770224   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:02.817819   72712 cri.go:89] found id: ""
	I0425 20:05:02.817854   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.817865   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:02.817898   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:02.817963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:02.857036   72712 cri.go:89] found id: ""
	I0425 20:05:02.857066   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.857077   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:02.857085   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:02.857144   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:02.900112   72712 cri.go:89] found id: ""
	I0425 20:05:02.900145   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.900157   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:02.900164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:02.900221   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:02.941079   72712 cri.go:89] found id: ""
	I0425 20:05:02.941109   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.941116   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:02.941121   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:02.941198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:02.983458   72712 cri.go:89] found id: ""
	I0425 20:05:02.983490   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.983502   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:02.983510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:02.983574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:03.025424   72712 cri.go:89] found id: ""
	I0425 20:05:03.025451   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.025462   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:03.025469   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:03.025556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:03.065285   72712 cri.go:89] found id: ""
	I0425 20:05:03.065316   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.065328   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:03.065340   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:03.065351   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:03.121235   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:03.121267   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:03.138036   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:03.138073   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:03.213604   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:03.213638   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:03.213655   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:03.296696   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:03.296741   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.842642   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:05.859125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:05.859199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:05.906505   72712 cri.go:89] found id: ""
	I0425 20:05:05.906529   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.906537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:05.906542   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:05.906595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:05.950793   72712 cri.go:89] found id: ""
	I0425 20:05:05.950819   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.950831   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:05.950838   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:05.950902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:05.991612   72712 cri.go:89] found id: ""
	I0425 20:05:05.991644   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.991654   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:05.991661   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:05.991755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:06.032273   72712 cri.go:89] found id: ""
	I0425 20:05:06.032314   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.032326   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:06.032334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:06.032392   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:06.071802   72712 cri.go:89] found id: ""
	I0425 20:05:06.071833   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.071844   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:06.071852   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:06.071908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:06.116676   72712 cri.go:89] found id: ""
	I0425 20:05:06.116702   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.116710   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:06.116716   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:06.116759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:06.154720   72712 cri.go:89] found id: ""
	I0425 20:05:06.154753   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.154765   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:06.154771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:06.154842   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:06.196421   72712 cri.go:89] found id: ""
	I0425 20:05:06.196457   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.196469   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:06.196480   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:06.196493   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:06.251061   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:06.251122   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:06.267764   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:06.267799   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:06.345302   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:06.345334   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:06.345349   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:06.427836   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:06.427868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:08.989442   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:09.004493   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:09.004551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:09.056062   72712 cri.go:89] found id: ""
	I0425 20:05:09.056086   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.056096   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:09.056101   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:09.056148   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:09.096791   72712 cri.go:89] found id: ""
	I0425 20:05:09.096817   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.096827   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:09.096834   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:09.096889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:09.134649   72712 cri.go:89] found id: ""
	I0425 20:05:09.134680   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.134691   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:09.134698   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:09.134757   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:09.175980   72712 cri.go:89] found id: ""
	I0425 20:05:09.176010   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.176021   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:09.176028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:09.176084   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:09.216263   72712 cri.go:89] found id: ""
	I0425 20:05:09.216299   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.216313   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:09.216325   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:09.216395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:09.260498   72712 cri.go:89] found id: ""
	I0425 20:05:09.260528   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.260538   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:09.260544   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:09.260603   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:09.303154   72712 cri.go:89] found id: ""
	I0425 20:05:09.303178   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.303201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:09.303209   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:09.303269   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:09.350798   72712 cri.go:89] found id: ""
	I0425 20:05:09.350829   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.350840   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:09.350852   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:09.350868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:09.405295   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:09.405332   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:09.422788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:09.422820   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:09.501819   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:09.501841   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:09.501855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:09.586938   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:09.586981   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:12.132731   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:12.148860   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:12.148935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:12.194021   72712 cri.go:89] found id: ""
	I0425 20:05:12.194051   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.194064   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:12.194072   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:12.194152   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:12.234680   72712 cri.go:89] found id: ""
	I0425 20:05:12.234710   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.234721   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:12.234728   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:12.234792   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:12.277751   72712 cri.go:89] found id: ""
	I0425 20:05:12.277783   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.277794   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:12.277802   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:12.277864   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:12.324068   72712 cri.go:89] found id: ""
	I0425 20:05:12.324100   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.324117   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:12.324125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:12.324187   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:12.366797   72712 cri.go:89] found id: ""
	I0425 20:05:12.366825   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.366837   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:12.366844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:12.366903   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:12.413092   72712 cri.go:89] found id: ""
	I0425 20:05:12.413120   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.413132   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:12.413139   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:12.413198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:12.461229   72712 cri.go:89] found id: ""
	I0425 20:05:12.461253   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.461262   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:12.461268   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:12.461333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:12.504646   72712 cri.go:89] found id: ""
	I0425 20:05:12.504669   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.504677   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:12.504685   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:12.504698   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:12.561630   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:12.561673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:12.578043   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:12.578069   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:12.655176   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:12.655195   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:12.655209   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:12.736323   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:12.736357   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.287503   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:15.302830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:15.302893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:15.339479   72712 cri.go:89] found id: ""
	I0425 20:05:15.339509   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.339519   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:15.339527   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:15.339589   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:15.381431   72712 cri.go:89] found id: ""
	I0425 20:05:15.381458   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.381467   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:15.381475   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:15.381537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:15.423729   72712 cri.go:89] found id: ""
	I0425 20:05:15.423755   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.423767   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:15.423774   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:15.423833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:15.464367   72712 cri.go:89] found id: ""
	I0425 20:05:15.464401   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.464413   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:15.464421   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:15.464489   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:15.508306   72712 cri.go:89] found id: ""
	I0425 20:05:15.508336   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.508348   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:15.508356   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:15.508419   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:15.548572   72712 cri.go:89] found id: ""
	I0425 20:05:15.548600   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.548610   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:15.548616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:15.548678   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:15.592885   72712 cri.go:89] found id: ""
	I0425 20:05:15.592914   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.592926   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:15.592933   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:15.592992   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:15.632817   72712 cri.go:89] found id: ""
	I0425 20:05:15.632855   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.632868   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:15.632880   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:15.632900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:15.648443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:15.648470   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:15.726167   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:15.726191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:15.726229   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:15.803028   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:15.803066   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.850519   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:15.850552   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:18.404671   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:18.422600   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:18.422663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:18.476977   72712 cri.go:89] found id: ""
	I0425 20:05:18.477001   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.477009   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:18.477021   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:18.477093   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:18.525595   72712 cri.go:89] found id: ""
	I0425 20:05:18.525631   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.525641   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:18.525648   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:18.525714   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:18.565485   72712 cri.go:89] found id: ""
	I0425 20:05:18.565513   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.565523   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:18.565531   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:18.565600   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:18.612059   72712 cri.go:89] found id: ""
	I0425 20:05:18.612096   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.612106   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:18.612112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:18.612173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:18.659407   72712 cri.go:89] found id: ""
	I0425 20:05:18.659438   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.659449   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:18.659456   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:18.659507   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:18.701065   72712 cri.go:89] found id: ""
	I0425 20:05:18.701092   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.701101   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:18.701106   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:18.701201   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:18.738234   72712 cri.go:89] found id: ""
	I0425 20:05:18.738264   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.738276   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:18.738284   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:18.738343   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:18.780460   72712 cri.go:89] found id: ""
	I0425 20:05:18.780489   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.780498   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:18.780514   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:18.780526   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:18.834345   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:18.834378   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:18.850006   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:18.850033   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:18.932146   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:18.932171   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:18.932185   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:19.015036   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:19.015068   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:21.568250   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:21.582519   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:21.582595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:21.622886   72712 cri.go:89] found id: ""
	I0425 20:05:21.622913   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.622920   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:21.622925   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:21.622974   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:21.664832   72712 cri.go:89] found id: ""
	I0425 20:05:21.664860   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.664874   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:21.664882   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:21.664950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:21.703801   72712 cri.go:89] found id: ""
	I0425 20:05:21.703829   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.703843   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:21.703850   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:21.703911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:21.741502   72712 cri.go:89] found id: ""
	I0425 20:05:21.741540   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.741549   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:21.741555   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:21.741612   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:21.783715   72712 cri.go:89] found id: ""
	I0425 20:05:21.783745   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.783754   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:21.783759   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:21.783803   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:21.822806   72712 cri.go:89] found id: ""
	I0425 20:05:21.822842   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.822851   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:21.822856   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:21.822915   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:21.864996   72712 cri.go:89] found id: ""
	I0425 20:05:21.865020   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.865030   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:21.865037   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:21.865092   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:21.907533   72712 cri.go:89] found id: ""
	I0425 20:05:21.907563   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.907575   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:21.907585   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:21.907601   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:21.964226   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:21.964260   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:21.980096   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:21.980123   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:22.059516   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:22.059539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:22.059566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:22.136752   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:22.136784   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:24.682139   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:24.697495   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:24.697564   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:24.739725   72712 cri.go:89] found id: ""
	I0425 20:05:24.739750   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.739760   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:24.739766   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:24.739824   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:24.777455   72712 cri.go:89] found id: ""
	I0425 20:05:24.777485   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.777497   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:24.777504   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:24.777566   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:24.821729   72712 cri.go:89] found id: ""
	I0425 20:05:24.821761   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.821774   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:24.821782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:24.821845   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:24.861745   72712 cri.go:89] found id: ""
	I0425 20:05:24.861773   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.861784   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:24.861791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:24.861851   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:24.903441   72712 cri.go:89] found id: ""
	I0425 20:05:24.903470   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.903479   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:24.903486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:24.903544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:24.943589   72712 cri.go:89] found id: ""
	I0425 20:05:24.943618   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.943629   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:24.943637   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:24.943717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:24.983629   72712 cri.go:89] found id: ""
	I0425 20:05:24.983661   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.983672   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:24.983680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:24.983739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:25.022413   72712 cri.go:89] found id: ""
	I0425 20:05:25.022441   72712 logs.go:276] 0 containers: []
	W0425 20:05:25.022451   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:25.022462   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:25.022477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:25.077402   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:25.077438   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:25.094488   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:25.094517   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:25.171485   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:25.171515   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:25.171535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:25.251131   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:25.251166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:27.797359   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:27.813601   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:27.813659   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:27.854017   72712 cri.go:89] found id: ""
	I0425 20:05:27.854051   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.854061   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:27.854066   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:27.854117   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:27.900425   72712 cri.go:89] found id: ""
	I0425 20:05:27.900451   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.900461   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:27.900468   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:27.900531   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:27.940064   72712 cri.go:89] found id: ""
	I0425 20:05:27.940096   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.940107   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:27.940114   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:27.940174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:27.979363   72712 cri.go:89] found id: ""
	I0425 20:05:27.979385   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.979393   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:27.979399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:27.979442   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:28.019702   72712 cri.go:89] found id: ""
	I0425 20:05:28.019723   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.019731   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:28.019736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:28.019798   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:28.058711   72712 cri.go:89] found id: ""
	I0425 20:05:28.058740   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.058748   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:28.058755   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:28.058810   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:28.104465   72712 cri.go:89] found id: ""
	I0425 20:05:28.104495   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.104507   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:28.104515   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:28.104577   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:28.142399   72712 cri.go:89] found id: ""
	I0425 20:05:28.142431   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.142440   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:28.142449   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:28.142460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:28.222763   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:28.222786   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:28.222801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:28.299797   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:28.299838   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:28.366569   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:28.366594   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:28.424581   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:28.424628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:30.942526   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:30.957400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:30.957482   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:30.996931   72712 cri.go:89] found id: ""
	I0425 20:05:30.996958   72712 logs.go:276] 0 containers: []
	W0425 20:05:30.996967   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:30.996974   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:30.997029   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:31.035673   72712 cri.go:89] found id: ""
	I0425 20:05:31.035700   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.035712   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:31.035719   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:31.035782   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:31.075783   72712 cri.go:89] found id: ""
	I0425 20:05:31.075809   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.075820   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:31.075826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:31.075886   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:31.114229   72712 cri.go:89] found id: ""
	I0425 20:05:31.114257   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.114267   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:31.114274   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:31.114333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:31.155385   72712 cri.go:89] found id: ""
	I0425 20:05:31.155409   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.155419   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:31.155427   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:31.155486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:31.193772   72712 cri.go:89] found id: ""
	I0425 20:05:31.193804   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.193815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:31.193823   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:31.193878   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:31.233886   72712 cri.go:89] found id: ""
	I0425 20:05:31.233909   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.233917   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:31.233923   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:31.233967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:31.273427   72712 cri.go:89] found id: ""
	I0425 20:05:31.273455   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.273465   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:31.273476   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:31.273491   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:31.354429   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:31.354462   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:31.406018   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:31.406047   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:31.460972   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:31.461007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:31.477485   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:31.477513   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:31.551616   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:34.052808   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:34.068068   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:34.068158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:34.120984   72712 cri.go:89] found id: ""
	I0425 20:05:34.121016   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.121024   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:34.121032   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:34.121082   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:34.160646   72712 cri.go:89] found id: ""
	I0425 20:05:34.160676   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.160687   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:34.160694   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:34.160752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:34.202641   72712 cri.go:89] found id: ""
	I0425 20:05:34.202665   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.202671   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:34.202677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:34.202733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:34.244352   72712 cri.go:89] found id: ""
	I0425 20:05:34.244379   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.244391   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:34.244400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:34.244460   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:34.285858   72712 cri.go:89] found id: ""
	I0425 20:05:34.285885   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.285896   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:34.285904   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:34.285956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:34.323634   72712 cri.go:89] found id: ""
	I0425 20:05:34.323662   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.323673   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:34.323681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:34.323739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:34.365230   72712 cri.go:89] found id: ""
	I0425 20:05:34.365256   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.365272   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:34.365280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:34.365339   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:34.409329   72712 cri.go:89] found id: ""
	I0425 20:05:34.409354   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.409365   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:34.409376   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:34.409390   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:34.464575   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:34.464606   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:34.480244   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:34.480270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:34.560204   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:34.560224   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:34.560236   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:34.640152   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:34.640187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:37.189992   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:37.204683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:37.204786   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:37.245857   72712 cri.go:89] found id: ""
	I0425 20:05:37.245891   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.245903   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:37.245910   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:37.245969   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:37.284668   72712 cri.go:89] found id: ""
	I0425 20:05:37.284696   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.284704   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:37.284710   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:37.284762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:37.324349   72712 cri.go:89] found id: ""
	I0425 20:05:37.324379   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.324391   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:37.324399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:37.324461   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:37.361764   72712 cri.go:89] found id: ""
	I0425 20:05:37.361787   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.361800   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:37.361811   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:37.361857   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:37.404331   72712 cri.go:89] found id: ""
	I0425 20:05:37.404353   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.404360   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:37.404366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:37.404430   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:37.445284   72712 cri.go:89] found id: ""
	I0425 20:05:37.445316   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.445327   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:37.445334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:37.445395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:37.483806   72712 cri.go:89] found id: ""
	I0425 20:05:37.483828   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.483837   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:37.483843   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:37.483888   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:37.524649   72712 cri.go:89] found id: ""
	I0425 20:05:37.524673   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.524680   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:37.524689   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:37.524701   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:37.581521   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:37.581553   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:37.598459   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:37.598487   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:37.671236   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:37.671256   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:37.671272   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:37.750517   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:37.750556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.293743   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:40.310344   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:40.310426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:40.356157   72712 cri.go:89] found id: ""
	I0425 20:05:40.356198   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.356208   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:40.356215   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:40.356277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:40.397857   72712 cri.go:89] found id: ""
	I0425 20:05:40.397886   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.397895   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:40.397902   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:40.397964   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:40.445034   72712 cri.go:89] found id: ""
	I0425 20:05:40.445057   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.445065   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:40.445071   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:40.445126   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:40.493744   72712 cri.go:89] found id: ""
	I0425 20:05:40.493773   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.493783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:40.493797   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:40.493856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:40.550546   72712 cri.go:89] found id: ""
	I0425 20:05:40.550572   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.550580   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:40.550587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:40.550654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:40.605122   72712 cri.go:89] found id: ""
	I0425 20:05:40.605153   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.605164   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:40.605172   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:40.605232   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:40.675713   72712 cri.go:89] found id: ""
	I0425 20:05:40.675745   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.675755   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:40.675769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:40.675828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:40.716064   72712 cri.go:89] found id: ""
	I0425 20:05:40.716093   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.716101   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:40.716109   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:40.716120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:40.781395   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:40.781441   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:40.797597   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:40.797628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:40.880931   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:40.880956   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:40.880971   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:40.970770   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:40.970800   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:43.520389   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:43.537668   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:43.537729   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:43.578137   72712 cri.go:89] found id: ""
	I0425 20:05:43.578166   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.578175   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:43.578180   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:43.578247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:43.617428   72712 cri.go:89] found id: ""
	I0425 20:05:43.617454   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.617462   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:43.617466   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:43.617519   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:43.655401   72712 cri.go:89] found id: ""
	I0425 20:05:43.655431   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.655443   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:43.655450   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:43.655514   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:43.695183   72712 cri.go:89] found id: ""
	I0425 20:05:43.695212   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.695230   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:43.695238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:43.695316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:43.735056   72712 cri.go:89] found id: ""
	I0425 20:05:43.735086   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.735098   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:43.735104   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:43.735162   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:43.774761   72712 cri.go:89] found id: ""
	I0425 20:05:43.774789   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.774799   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:43.774830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:43.774889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:43.819102   72712 cri.go:89] found id: ""
	I0425 20:05:43.819128   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.819138   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:43.819146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:43.819206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:43.858235   72712 cri.go:89] found id: ""
	I0425 20:05:43.858267   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.858278   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:43.858289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:43.858303   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:43.940756   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:43.940794   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:43.985878   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:43.985925   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:44.040177   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:44.040207   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:44.055912   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:44.055942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:44.143724   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:46.643923   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:46.658863   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:46.658941   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:46.697826   72712 cri.go:89] found id: ""
	I0425 20:05:46.697850   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.697858   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:46.697884   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:46.697947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:46.739850   72712 cri.go:89] found id: ""
	I0425 20:05:46.739877   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.739888   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:46.739897   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:46.739955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:46.781212   72712 cri.go:89] found id: ""
	I0425 20:05:46.781241   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.781256   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:46.781262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:46.781321   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:46.826005   72712 cri.go:89] found id: ""
	I0425 20:05:46.826036   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.826047   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:46.826055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:46.826109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:46.865428   72712 cri.go:89] found id: ""
	I0425 20:05:46.865456   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.865465   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:46.865472   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:46.865522   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:46.914860   72712 cri.go:89] found id: ""
	I0425 20:05:46.914887   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.914897   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:46.914907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:46.914968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:46.955323   72712 cri.go:89] found id: ""
	I0425 20:05:46.955355   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.955365   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:46.955373   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:46.955436   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:46.999369   72712 cri.go:89] found id: ""
	I0425 20:05:46.999396   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.999408   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:46.999419   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:46.999464   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:47.013865   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:47.013893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:47.094725   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:47.094755   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:47.094771   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:47.178380   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:47.178426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:47.227217   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:47.227249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:49.780217   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:49.795690   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:49.795760   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:49.834909   72712 cri.go:89] found id: ""
	I0425 20:05:49.834935   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.834943   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:49.834951   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:49.835004   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:49.872717   72712 cri.go:89] found id: ""
	I0425 20:05:49.872747   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.872755   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:49.872762   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:49.872807   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:49.919348   72712 cri.go:89] found id: ""
	I0425 20:05:49.919376   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.919387   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:49.919395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:49.919465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:49.959673   72712 cri.go:89] found id: ""
	I0425 20:05:49.959705   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.959716   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:49.959728   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:49.959796   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:49.999276   72712 cri.go:89] found id: ""
	I0425 20:05:49.999299   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.999306   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:49.999312   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:49.999361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:50.037426   72712 cri.go:89] found id: ""
	I0425 20:05:50.037454   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.037461   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:50.037466   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:50.037510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:50.080666   72712 cri.go:89] found id: ""
	I0425 20:05:50.080695   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.080703   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:50.080719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:50.080776   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:50.126065   72712 cri.go:89] found id: ""
	I0425 20:05:50.126111   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.126123   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:50.126134   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:50.126148   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:50.140778   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:50.140805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:50.213282   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:50.213308   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:50.213320   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:50.293798   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:50.293832   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:50.336823   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:50.336859   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:52.892579   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:52.909556   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:52.909629   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:52.948098   72712 cri.go:89] found id: ""
	I0425 20:05:52.948127   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.948138   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:52.948146   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:52.948206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:52.988813   72712 cri.go:89] found id: ""
	I0425 20:05:52.988840   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.988848   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:52.988853   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:52.988898   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:53.032181   72712 cri.go:89] found id: ""
	I0425 20:05:53.032211   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.032222   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:53.032230   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:53.032288   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:53.075496   72712 cri.go:89] found id: ""
	I0425 20:05:53.075528   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.075538   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:53.075543   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:53.075599   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:53.119037   72712 cri.go:89] found id: ""
	I0425 20:05:53.119070   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.119082   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:53.119095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:53.119158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:53.158276   72712 cri.go:89] found id: ""
	I0425 20:05:53.158303   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.158314   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:53.158321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:53.158381   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:53.196168   72712 cri.go:89] found id: ""
	I0425 20:05:53.196199   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.196211   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:53.196219   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:53.196277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:53.235212   72712 cri.go:89] found id: ""
	I0425 20:05:53.235235   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.235243   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:53.235250   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:53.235261   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:53.290435   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:53.290474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:53.306351   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:53.306380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:53.388623   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:53.388652   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:53.388666   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:53.480388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:53.480426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:56.027403   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:56.042683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:56.042755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:56.083672   72712 cri.go:89] found id: ""
	I0425 20:05:56.083706   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.083718   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:56.083725   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:56.083790   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:56.124071   72712 cri.go:89] found id: ""
	I0425 20:05:56.124105   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.124126   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:56.124134   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:56.124200   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:56.166692   72712 cri.go:89] found id: ""
	I0425 20:05:56.166724   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.166737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:56.166744   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:56.166808   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:56.203833   72712 cri.go:89] found id: ""
	I0425 20:05:56.203871   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.203884   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:56.203892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:56.203950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:56.242277   72712 cri.go:89] found id: ""
	I0425 20:05:56.242319   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.242341   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:56.242349   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:56.242416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:56.281697   72712 cri.go:89] found id: ""
	I0425 20:05:56.281726   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.281733   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:56.281739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:56.281812   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:56.322190   72712 cri.go:89] found id: ""
	I0425 20:05:56.322233   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.322243   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:56.322248   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:56.322310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:56.364831   72712 cri.go:89] found id: ""
	I0425 20:05:56.364853   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.364864   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:56.364875   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:56.364889   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:56.422824   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:56.422856   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:56.437619   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:56.437641   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:56.512938   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:56.512961   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:56.512977   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:56.598670   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:56.598708   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:59.150322   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:59.166883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:59.166956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:59.205086   72712 cri.go:89] found id: ""
	I0425 20:05:59.205112   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.205121   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:59.205126   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:59.205199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:59.253430   72712 cri.go:89] found id: ""
	I0425 20:05:59.253458   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.253469   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:59.253478   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:59.253539   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:59.293691   72712 cri.go:89] found id: ""
	I0425 20:05:59.293719   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.293731   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:59.293738   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:59.293801   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:59.331580   72712 cri.go:89] found id: ""
	I0425 20:05:59.331604   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.331613   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:59.331619   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:59.331663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:59.369985   72712 cri.go:89] found id: ""
	I0425 20:05:59.370012   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.370023   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:59.370031   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:59.370095   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:59.411636   72712 cri.go:89] found id: ""
	I0425 20:05:59.411662   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.411670   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:59.411676   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:59.411733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:59.454735   72712 cri.go:89] found id: ""
	I0425 20:05:59.454762   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.454774   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:59.454782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:59.454839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:59.497664   72712 cri.go:89] found id: ""
	I0425 20:05:59.497694   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.497704   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:59.497715   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:59.497731   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:59.556694   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:59.556728   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:59.572160   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:59.572187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:59.649040   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:59.649063   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:59.649083   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:59.727941   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:59.727975   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.275513   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:02.290486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:02.290557   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:02.332217   72712 cri.go:89] found id: ""
	I0425 20:06:02.332255   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.332273   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:02.332281   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:02.332357   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:02.373346   72712 cri.go:89] found id: ""
	I0425 20:06:02.373370   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.373377   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:02.373382   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:02.373439   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:02.415835   72712 cri.go:89] found id: ""
	I0425 20:06:02.415861   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.415873   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:02.415881   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:02.415939   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:02.458876   72712 cri.go:89] found id: ""
	I0425 20:06:02.458905   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.458917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:02.458926   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:02.459008   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:02.502092   72712 cri.go:89] found id: ""
	I0425 20:06:02.502127   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.502138   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:02.502146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:02.502235   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:02.546357   72712 cri.go:89] found id: ""
	I0425 20:06:02.546383   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.546393   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:02.546399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:02.546459   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:02.586842   72712 cri.go:89] found id: ""
	I0425 20:06:02.586870   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.586881   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:02.586887   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:02.586932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:02.629305   72712 cri.go:89] found id: ""
	I0425 20:06:02.629339   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.629350   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:02.629360   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:02.629374   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.676583   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:02.676626   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:02.731790   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:02.731825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:02.747473   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:02.747499   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:02.824265   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:02.824289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:02.824304   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.408968   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:05.423645   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:05.423713   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:05.467402   72712 cri.go:89] found id: ""
	I0425 20:06:05.467425   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.467434   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:05.467445   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:05.467510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:05.503131   72712 cri.go:89] found id: ""
	I0425 20:06:05.503153   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.503161   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:05.503166   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:05.503216   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:05.545694   72712 cri.go:89] found id: ""
	I0425 20:06:05.545721   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.545732   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:05.545739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:05.545804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:05.585879   72712 cri.go:89] found id: ""
	I0425 20:06:05.585905   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.585912   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:05.585917   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:05.585963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:05.625520   72712 cri.go:89] found id: ""
	I0425 20:06:05.625549   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.625560   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:05.625567   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:05.625620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:05.664306   72712 cri.go:89] found id: ""
	I0425 20:06:05.664335   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.664345   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:05.664364   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:05.664437   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:05.705353   72712 cri.go:89] found id: ""
	I0425 20:06:05.705385   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.705397   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:05.705405   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:05.705468   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:05.743935   72712 cri.go:89] found id: ""
	I0425 20:06:05.743968   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.743977   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:05.743986   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:05.743997   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:05.801190   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:05.801234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:05.817046   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:05.817074   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:05.899413   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:05.899443   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:05.899458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.986303   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:05.986336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:08.531748   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:08.550667   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:08.550749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:08.594062   72712 cri.go:89] found id: ""
	I0425 20:06:08.594093   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.594102   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:08.594108   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:08.594163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:08.635823   72712 cri.go:89] found id: ""
	I0425 20:06:08.635861   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.635872   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:08.635880   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:08.635944   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:08.675338   72712 cri.go:89] found id: ""
	I0425 20:06:08.675383   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.675395   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:08.675402   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:08.675463   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:08.715971   72712 cri.go:89] found id: ""
	I0425 20:06:08.716001   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.716012   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:08.716019   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:08.716088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:08.758565   72712 cri.go:89] found id: ""
	I0425 20:06:08.758597   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.758608   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:08.758616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:08.758683   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:08.800179   72712 cri.go:89] found id: ""
	I0425 20:06:08.800207   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.800218   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:08.800226   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:08.800286   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:08.854603   72712 cri.go:89] found id: ""
	I0425 20:06:08.854639   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.854651   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:08.854659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:08.854724   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:08.904115   72712 cri.go:89] found id: ""
	I0425 20:06:08.904141   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.904152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:08.904162   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:08.904177   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:08.921826   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:08.921855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:09.003667   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:09.003687   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:09.003699   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:09.086301   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:09.086346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:09.138478   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:09.138516   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:11.704402   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:11.721810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:11.721902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:11.768790   72712 cri.go:89] found id: ""
	I0425 20:06:11.768829   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.768850   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:11.768858   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:11.768928   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:11.813543   72712 cri.go:89] found id: ""
	I0425 20:06:11.813576   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.813588   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:11.813595   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:11.813654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:11.853930   72712 cri.go:89] found id: ""
	I0425 20:06:11.853962   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.853972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:11.853980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:11.854044   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:11.900808   72712 cri.go:89] found id: ""
	I0425 20:06:11.900843   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.900853   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:11.900861   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:11.900919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:11.948850   72712 cri.go:89] found id: ""
	I0425 20:06:11.948876   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.948885   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:11.948890   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:11.948945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:11.989326   72712 cri.go:89] found id: ""
	I0425 20:06:11.989356   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.989365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:11.989371   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:11.989450   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:12.033912   72712 cri.go:89] found id: ""
	I0425 20:06:12.033943   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.033954   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:12.033959   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:12.034015   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:12.076170   72712 cri.go:89] found id: ""
	I0425 20:06:12.076199   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.076209   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:12.076217   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:12.076230   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:12.124851   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:12.124881   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:12.178927   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:12.178964   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:12.194925   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:12.194952   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:12.272163   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:12.272187   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:12.272202   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:14.851400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:14.869893   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:14.869967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:14.915793   72712 cri.go:89] found id: ""
	I0425 20:06:14.915820   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.915829   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:14.915836   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:14.915896   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:14.959549   72712 cri.go:89] found id: ""
	I0425 20:06:14.959576   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.959587   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:14.959606   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:14.959672   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:15.001420   72712 cri.go:89] found id: ""
	I0425 20:06:15.001453   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.001465   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:15.001474   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:15.001552   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:15.047960   72712 cri.go:89] found id: ""
	I0425 20:06:15.047988   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.047996   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:15.048001   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:15.048049   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:15.096688   72712 cri.go:89] found id: ""
	I0425 20:06:15.096722   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.096730   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:15.096736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:15.096795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:15.142673   72712 cri.go:89] found id: ""
	I0425 20:06:15.142701   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.142712   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:15.142719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:15.142784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:15.181729   72712 cri.go:89] found id: ""
	I0425 20:06:15.181757   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.181766   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:15.181773   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:15.181820   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:15.227858   72712 cri.go:89] found id: ""
	I0425 20:06:15.227886   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.227897   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:15.227905   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:15.227917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:15.283253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:15.283293   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:15.305572   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:15.305604   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:15.439587   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:15.439615   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:15.439631   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:15.525678   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:15.525714   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:18.078788   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:18.095012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:18.095083   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:18.136753   72712 cri.go:89] found id: ""
	I0425 20:06:18.136784   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.136796   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:18.136802   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:18.136850   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:18.184584   72712 cri.go:89] found id: ""
	I0425 20:06:18.184606   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.184614   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:18.184619   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:18.184691   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:18.228201   72712 cri.go:89] found id: ""
	I0425 20:06:18.228250   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.228263   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:18.228270   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:18.228326   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:18.267756   72712 cri.go:89] found id: ""
	I0425 20:06:18.267778   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.267786   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:18.267792   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:18.267855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:18.309727   72712 cri.go:89] found id: ""
	I0425 20:06:18.309755   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.309763   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:18.309769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:18.309827   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:18.350549   72712 cri.go:89] found id: ""
	I0425 20:06:18.350580   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.350592   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:18.350599   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:18.350656   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:18.393868   72712 cri.go:89] found id: ""
	I0425 20:06:18.393891   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.393902   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:18.393910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:18.393989   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:18.435163   72712 cri.go:89] found id: ""
	I0425 20:06:18.435195   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.435204   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:18.435211   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:18.435224   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:18.450871   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:18.450901   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:18.534501   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:18.534526   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:18.534538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:18.616979   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:18.617015   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:18.663568   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:18.663598   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.217744   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:21.235862   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:21.235955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:21.288966   72712 cri.go:89] found id: ""
	I0425 20:06:21.288996   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.289005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:21.289014   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:21.289075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:21.362068   72712 cri.go:89] found id: ""
	I0425 20:06:21.362092   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.362101   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:21.362108   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:21.362168   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:21.416870   72712 cri.go:89] found id: ""
	I0425 20:06:21.416894   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.416901   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:21.416907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:21.416956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:21.461465   72712 cri.go:89] found id: ""
	I0425 20:06:21.461495   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.461503   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:21.461508   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:21.461570   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:21.499985   72712 cri.go:89] found id: ""
	I0425 20:06:21.500014   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.500025   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:21.500032   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:21.500081   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:21.543725   72712 cri.go:89] found id: ""
	I0425 20:06:21.543764   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.543776   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:21.543784   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:21.543841   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:21.586535   72712 cri.go:89] found id: ""
	I0425 20:06:21.586566   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.586578   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:21.586587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:21.586644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:21.627885   72712 cri.go:89] found id: ""
	I0425 20:06:21.627912   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.627921   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:21.627929   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:21.627942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.685973   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:21.686006   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:21.702529   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:21.702556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:21.781634   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:21.781660   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:21.781673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:21.862986   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:21.863027   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:24.413547   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:24.428247   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:24.428323   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:24.468708   72712 cri.go:89] found id: ""
	I0425 20:06:24.468757   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.468768   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:24.468775   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:24.468836   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:24.507667   72712 cri.go:89] found id: ""
	I0425 20:06:24.507694   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.507702   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:24.507708   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:24.507769   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:24.548537   72712 cri.go:89] found id: ""
	I0425 20:06:24.548562   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.548570   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:24.548576   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:24.548625   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:24.591240   72712 cri.go:89] found id: ""
	I0425 20:06:24.591264   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.591272   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:24.591280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:24.591325   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:24.631530   72712 cri.go:89] found id: ""
	I0425 20:06:24.631557   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.631568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:24.631575   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:24.631642   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:24.672878   72712 cri.go:89] found id: ""
	I0425 20:06:24.672903   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.672911   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:24.672916   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:24.672960   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:24.716168   72712 cri.go:89] found id: ""
	I0425 20:06:24.716193   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.716201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:24.716206   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:24.716256   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:24.758061   72712 cri.go:89] found id: ""
	I0425 20:06:24.758098   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.758110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:24.758122   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:24.758135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:24.839866   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:24.839900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:24.889288   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:24.889380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:24.946445   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:24.946488   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:24.963093   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:24.963126   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:25.044921   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:27.545838   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:27.562659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:27.562717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:27.606462   72712 cri.go:89] found id: ""
	I0425 20:06:27.606491   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.606501   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:27.606509   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:27.606567   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:27.650475   72712 cri.go:89] found id: ""
	I0425 20:06:27.650505   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.650517   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:27.650524   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:27.650583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:27.695163   72712 cri.go:89] found id: ""
	I0425 20:06:27.695190   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.695201   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:27.695208   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:27.695265   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:27.741798   72712 cri.go:89] found id: ""
	I0425 20:06:27.741832   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.741842   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:27.741849   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:27.741904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:27.784146   72712 cri.go:89] found id: ""
	I0425 20:06:27.784175   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.784185   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:27.784193   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:27.784253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:27.827179   72712 cri.go:89] found id: ""
	I0425 20:06:27.827213   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.827225   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:27.827234   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:27.827298   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:27.872941   72712 cri.go:89] found id: ""
	I0425 20:06:27.872961   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.872980   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:27.872985   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:27.873040   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:27.917920   72712 cri.go:89] found id: ""
	I0425 20:06:27.917949   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.917959   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:27.917970   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:27.917985   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:27.971411   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:27.971455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:27.988704   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:27.988743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:28.064208   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:28.064229   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:28.064242   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:28.147388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:28.147427   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.694349   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:30.708595   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:30.708671   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:30.752963   72712 cri.go:89] found id: ""
	I0425 20:06:30.752994   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.753005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:30.753012   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:30.753073   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:30.795453   72712 cri.go:89] found id: ""
	I0425 20:06:30.795488   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.795498   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:30.795507   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:30.795574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:30.838945   72712 cri.go:89] found id: ""
	I0425 20:06:30.838970   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.838978   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:30.838984   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:30.839042   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:30.886128   72712 cri.go:89] found id: ""
	I0425 20:06:30.886160   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.886170   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:30.886178   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:30.886255   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:30.927773   72712 cri.go:89] found id: ""
	I0425 20:06:30.927805   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.927819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:30.927827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:30.927893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:30.968628   72712 cri.go:89] found id: ""
	I0425 20:06:30.968660   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.968672   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:30.968680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:30.968743   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:31.014590   72712 cri.go:89] found id: ""
	I0425 20:06:31.014616   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.014627   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:31.014634   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:31.014697   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:31.053236   72712 cri.go:89] found id: ""
	I0425 20:06:31.053262   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.053274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:31.053285   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:31.053301   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:31.107797   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:31.107834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:31.123675   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:31.123702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:31.201180   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:31.201204   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:31.201215   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:31.289474   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:31.289512   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:33.840828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:33.857736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:33.857795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:33.898621   72712 cri.go:89] found id: ""
	I0425 20:06:33.898647   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.898658   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:33.898665   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:33.898727   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:33.939211   72712 cri.go:89] found id: ""
	I0425 20:06:33.939234   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.939245   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:33.939250   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:33.939305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:33.981872   72712 cri.go:89] found id: ""
	I0425 20:06:33.981896   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.981903   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:33.981909   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:33.981965   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:34.027570   72712 cri.go:89] found id: ""
	I0425 20:06:34.027597   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.027609   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:34.027617   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:34.027675   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:34.072544   72712 cri.go:89] found id: ""
	I0425 20:06:34.072570   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.072586   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:34.072594   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:34.072674   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:34.119326   72712 cri.go:89] found id: ""
	I0425 20:06:34.119349   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.119358   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:34.119366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:34.119423   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:34.169618   72712 cri.go:89] found id: ""
	I0425 20:06:34.169642   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.169650   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:34.169655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:34.169705   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:34.213570   72712 cri.go:89] found id: ""
	I0425 20:06:34.213593   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.213601   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:34.213609   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:34.213621   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:34.255722   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:34.255756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:34.311113   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:34.311147   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:34.326869   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:34.326897   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:34.399765   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:34.399788   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:34.399801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:36.986610   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:37.003090   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:37.003163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:37.045929   72712 cri.go:89] found id: ""
	I0425 20:06:37.045956   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.045964   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:37.045969   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:37.046022   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:37.086835   72712 cri.go:89] found id: ""
	I0425 20:06:37.086868   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.086879   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:37.086885   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:37.086937   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:37.127454   72712 cri.go:89] found id: ""
	I0425 20:06:37.127479   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.127488   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:37.127494   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:37.127551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:37.168878   72712 cri.go:89] found id: ""
	I0425 20:06:37.168904   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.168917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:37.168924   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:37.168986   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:37.208859   72712 cri.go:89] found id: ""
	I0425 20:06:37.208889   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.208901   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:37.208914   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:37.208970   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:37.250407   72712 cri.go:89] found id: ""
	I0425 20:06:37.250439   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.250452   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:37.250467   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:37.250536   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:37.291004   72712 cri.go:89] found id: ""
	I0425 20:06:37.291040   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.291054   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:37.291063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:37.291125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:37.335573   72712 cri.go:89] found id: ""
	I0425 20:06:37.335597   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.335608   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:37.335619   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:37.335635   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:37.392773   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:37.392810   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:37.408311   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:37.408343   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:37.491376   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:37.491402   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:37.491416   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:37.574559   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:37.574600   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.125241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:40.142254   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:40.142347   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:40.186859   72712 cri.go:89] found id: ""
	I0425 20:06:40.186893   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.186904   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:40.186911   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:40.186972   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:40.229247   72712 cri.go:89] found id: ""
	I0425 20:06:40.229275   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.229288   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:40.229295   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:40.229361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:40.268853   72712 cri.go:89] found id: ""
	I0425 20:06:40.268879   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.268890   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:40.268897   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:40.268959   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:40.307621   72712 cri.go:89] found id: ""
	I0425 20:06:40.307650   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.307669   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:40.307677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:40.307732   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:40.351448   72712 cri.go:89] found id: ""
	I0425 20:06:40.351472   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.351484   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:40.351492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:40.351548   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:40.396771   72712 cri.go:89] found id: ""
	I0425 20:06:40.396804   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.396815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:40.396824   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:40.396890   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:40.443605   72712 cri.go:89] found id: ""
	I0425 20:06:40.443634   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.443642   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:40.443647   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:40.443694   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:40.495496   72712 cri.go:89] found id: ""
	I0425 20:06:40.495525   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.495536   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:40.495548   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:40.495563   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.539428   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:40.539457   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:40.596259   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:40.596305   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:40.613140   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:40.613167   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:40.701768   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:40.701793   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:40.701805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:43.294502   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:43.310041   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:43.310113   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:43.351841   72712 cri.go:89] found id: ""
	I0425 20:06:43.351864   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.351872   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:43.351877   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:43.351924   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:43.395467   72712 cri.go:89] found id: ""
	I0425 20:06:43.395497   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.395509   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:43.395516   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:43.395576   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:43.437256   72712 cri.go:89] found id: ""
	I0425 20:06:43.437354   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.437375   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:43.437384   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:43.437465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:43.480744   72712 cri.go:89] found id: ""
	I0425 20:06:43.480772   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.480783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:43.480791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:43.480839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:43.519916   72712 cri.go:89] found id: ""
	I0425 20:06:43.519951   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.519961   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:43.519975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:43.520039   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:43.557861   72712 cri.go:89] found id: ""
	I0425 20:06:43.557890   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.557901   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:43.557910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:43.557968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:43.594423   72712 cri.go:89] found id: ""
	I0425 20:06:43.594449   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.594458   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:43.594464   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:43.594512   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:43.632227   72712 cri.go:89] found id: ""
	I0425 20:06:43.632253   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.632262   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:43.632270   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:43.632281   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:43.688307   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:43.688336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:43.703382   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:43.703407   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:43.782073   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:43.782093   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:43.782109   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:43.872811   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:43.872842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:46.420420   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:46.435110   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:46.435174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:46.474019   72712 cri.go:89] found id: ""
	I0425 20:06:46.474044   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.474054   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:46.474067   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:46.474125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:46.517053   72712 cri.go:89] found id: ""
	I0425 20:06:46.517078   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.517088   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:46.517096   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:46.517150   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:46.560934   72712 cri.go:89] found id: ""
	I0425 20:06:46.560963   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.560972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:46.560977   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:46.561030   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:46.605969   72712 cri.go:89] found id: ""
	I0425 20:06:46.605997   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.606007   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:46.606012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:46.606061   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:46.647025   72712 cri.go:89] found id: ""
	I0425 20:06:46.647049   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.647058   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:46.647063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:46.647118   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:46.686931   72712 cri.go:89] found id: ""
	I0425 20:06:46.686956   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.686966   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:46.686975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:46.687053   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:46.727183   72712 cri.go:89] found id: ""
	I0425 20:06:46.727207   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.727216   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:46.727224   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:46.727277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:46.768030   72712 cri.go:89] found id: ""
	I0425 20:06:46.768059   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.768073   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:46.768085   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:46.768105   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:46.823400   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:46.823439   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:46.838443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:46.838468   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:46.919509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:46.919527   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:46.919538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:46.996250   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:46.996284   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:49.542696   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:49.557346   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:49.557444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:49.595195   72712 cri.go:89] found id: ""
	I0425 20:06:49.595220   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.595231   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:49.595238   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:49.595305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:49.641324   72712 cri.go:89] found id: ""
	I0425 20:06:49.641354   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.641365   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:49.641373   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:49.641426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:49.681510   72712 cri.go:89] found id: ""
	I0425 20:06:49.681540   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.681552   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:49.681559   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:49.681620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:49.721482   72712 cri.go:89] found id: ""
	I0425 20:06:49.721509   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.721518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:49.721525   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:49.721581   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:49.762682   72712 cri.go:89] found id: ""
	I0425 20:06:49.762710   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.762723   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:49.762731   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:49.762793   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:49.801892   72712 cri.go:89] found id: ""
	I0425 20:06:49.801920   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.801932   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:49.801943   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:49.802002   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:49.840347   72712 cri.go:89] found id: ""
	I0425 20:06:49.840376   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.840387   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:49.840395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:49.840458   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:49.898486   72712 cri.go:89] found id: ""
	I0425 20:06:49.898516   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.898527   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:49.898536   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:49.898547   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:49.952735   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:49.952775   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:49.967986   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:49.968018   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:50.048003   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:50.048024   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:50.048040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:50.126062   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:50.126098   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:52.679721   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:52.695636   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:52.695700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:52.738329   72712 cri.go:89] found id: ""
	I0425 20:06:52.738359   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.738368   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:52.738374   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:52.738420   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:52.779388   72712 cri.go:89] found id: ""
	I0425 20:06:52.779418   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.779426   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:52.779433   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:52.779496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:52.821105   72712 cri.go:89] found id: ""
	I0425 20:06:52.821137   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.821149   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:52.821168   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:52.821231   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:52.861781   72712 cri.go:89] found id: ""
	I0425 20:06:52.861817   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.861825   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:52.861831   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:52.861885   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:52.904602   72712 cri.go:89] found id: ""
	I0425 20:06:52.904633   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.904644   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:52.904651   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:52.904712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:52.951137   72712 cri.go:89] found id: ""
	I0425 20:06:52.951174   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.951183   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:52.951188   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:52.951234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:52.994199   72712 cri.go:89] found id: ""
	I0425 20:06:52.994249   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.994257   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:52.994262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:52.994315   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:53.031997   72712 cri.go:89] found id: ""
	I0425 20:06:53.032020   72712 logs.go:276] 0 containers: []
	W0425 20:06:53.032027   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:53.032035   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:53.032046   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:53.111351   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:53.111383   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:53.162470   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:53.162504   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:53.217188   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:53.217223   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:53.233071   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:53.233100   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:53.308983   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:55.809162   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:55.825185   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:55.825259   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:55.865963   72712 cri.go:89] found id: ""
	I0425 20:06:55.865989   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.866001   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:55.866009   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:55.866060   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:55.920565   72712 cri.go:89] found id: ""
	I0425 20:06:55.920601   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.920612   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:55.920620   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:55.920677   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:55.962643   72712 cri.go:89] found id: ""
	I0425 20:06:55.962669   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.962677   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:55.962684   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:55.962738   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:56.000737   72712 cri.go:89] found id: ""
	I0425 20:06:56.000764   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.000773   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:56.000782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:56.000828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:56.042226   72712 cri.go:89] found id: ""
	I0425 20:06:56.042251   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.042259   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:56.042265   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:56.042316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:56.080765   72712 cri.go:89] found id: ""
	I0425 20:06:56.080788   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.080798   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:56.080810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:56.080869   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:56.119563   72712 cri.go:89] found id: ""
	I0425 20:06:56.119590   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.119602   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:56.119608   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:56.119667   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:56.160136   72712 cri.go:89] found id: ""
	I0425 20:06:56.160162   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.160170   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:56.160179   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:56.160193   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:56.213506   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:56.213539   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:56.232121   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:56.232150   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:56.336606   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:56.336629   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:56.336640   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:56.426867   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:56.426908   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:58.975395   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:58.991064   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:58.991125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:59.031157   72712 cri.go:89] found id: ""
	I0425 20:06:59.031179   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.031190   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:59.031197   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:59.031253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:59.071893   72712 cri.go:89] found id: ""
	I0425 20:06:59.071923   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.071931   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:59.071937   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:59.071998   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:59.114714   72712 cri.go:89] found id: ""
	I0425 20:06:59.114749   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.114760   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:59.114768   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:59.114840   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:59.159482   72712 cri.go:89] found id: ""
	I0425 20:06:59.159510   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.159518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:59.159523   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:59.159575   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:59.201218   72712 cri.go:89] found id: ""
	I0425 20:06:59.201245   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.201253   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:59.201263   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:59.201312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:59.247277   72712 cri.go:89] found id: ""
	I0425 20:06:59.247305   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.247316   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:59.247324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:59.247379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:59.286713   72712 cri.go:89] found id: ""
	I0425 20:06:59.286738   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.286746   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:59.286751   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:59.286804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:59.332263   72712 cri.go:89] found id: ""
	I0425 20:06:59.332296   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.332320   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:59.332332   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:59.332346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:59.416446   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:59.416477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:59.462125   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:59.462166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:59.514881   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:59.514907   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:59.530109   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:59.530134   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:59.605820   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.106478   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:02.124859   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:02.124934   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:02.180491   72712 cri.go:89] found id: ""
	I0425 20:07:02.180526   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.180537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:02.180545   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:02.180601   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:02.237075   72712 cri.go:89] found id: ""
	I0425 20:07:02.237104   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.237118   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:02.237126   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:02.237190   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:02.295104   72712 cri.go:89] found id: ""
	I0425 20:07:02.295129   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.295140   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:02.295148   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:02.295210   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:02.335392   72712 cri.go:89] found id: ""
	I0425 20:07:02.335418   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.335428   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:02.335435   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:02.335496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:02.376964   72712 cri.go:89] found id: ""
	I0425 20:07:02.376990   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.377002   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:02.377009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:02.377066   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:02.415460   72712 cri.go:89] found id: ""
	I0425 20:07:02.415484   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.415491   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:02.415496   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:02.415550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:02.461946   72712 cri.go:89] found id: ""
	I0425 20:07:02.461972   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.461993   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:02.462009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:02.462075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:02.502829   72712 cri.go:89] found id: ""
	I0425 20:07:02.502851   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.502858   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:02.502866   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:02.502878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:02.558264   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:02.558296   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:02.574175   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:02.574225   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:02.649363   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.649389   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:02.649404   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:02.730528   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:02.730560   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.276648   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:05.292055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:05.292121   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:05.332849   72712 cri.go:89] found id: ""
	I0425 20:07:05.332874   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.332884   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:05.332892   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:05.332954   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:05.376446   72712 cri.go:89] found id: ""
	I0425 20:07:05.376475   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.376487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:05.376494   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:05.376556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:05.418635   72712 cri.go:89] found id: ""
	I0425 20:07:05.418664   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.418675   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:05.418682   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:05.418745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:05.459082   72712 cri.go:89] found id: ""
	I0425 20:07:05.459113   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.459123   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:05.459128   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:05.459175   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:05.498473   72712 cri.go:89] found id: ""
	I0425 20:07:05.498502   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.498514   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:05.498521   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:05.498578   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:05.543121   72712 cri.go:89] found id: ""
	I0425 20:07:05.543150   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.543159   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:05.543164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:05.543211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:05.585722   72712 cri.go:89] found id: ""
	I0425 20:07:05.585748   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.585758   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:05.585766   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:05.585826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:05.629614   72712 cri.go:89] found id: ""
	I0425 20:07:05.629647   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.629661   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:05.629671   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:05.629685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:05.683974   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:05.684007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:05.700651   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:05.700685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:05.782097   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:05.782127   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:05.782142   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:05.863881   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:05.863918   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:08.412898   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:08.428152   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:08.428206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:08.468403   72712 cri.go:89] found id: ""
	I0425 20:07:08.468441   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.468455   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:08.468464   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:08.468529   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:08.511246   72712 cri.go:89] found id: ""
	I0425 20:07:08.511285   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.511297   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:08.511304   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:08.511363   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:08.553121   72712 cri.go:89] found id: ""
	I0425 20:07:08.553148   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.553155   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:08.553161   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:08.553214   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:08.589723   72712 cri.go:89] found id: ""
	I0425 20:07:08.589745   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.589755   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:08.589762   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:08.589826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:08.629502   72712 cri.go:89] found id: ""
	I0425 20:07:08.629525   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.629533   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:08.629538   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:08.629591   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:08.677107   72712 cri.go:89] found id: ""
	I0425 20:07:08.677144   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.677153   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:08.677164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:08.677212   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:08.716501   72712 cri.go:89] found id: ""
	I0425 20:07:08.716531   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.716542   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:08.716550   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:08.716611   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:08.763473   72712 cri.go:89] found id: ""
	I0425 20:07:08.763503   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.763515   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:08.763526   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:08.763543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:08.848961   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:08.848985   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:08.849000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:08.945851   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:08.945890   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:08.989429   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:08.989460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:09.042721   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:09.042756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.559400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:11.575100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:11.575180   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:11.613246   72712 cri.go:89] found id: ""
	I0425 20:07:11.613271   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.613284   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:11.613290   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:11.613351   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:11.655158   72712 cri.go:89] found id: ""
	I0425 20:07:11.655189   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.655200   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:11.655208   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:11.655266   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:11.695122   72712 cri.go:89] found id: ""
	I0425 20:07:11.695144   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.695151   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:11.695156   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:11.695205   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:11.735578   72712 cri.go:89] found id: ""
	I0425 20:07:11.735604   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.735615   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:11.735621   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:11.735680   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:11.774750   72712 cri.go:89] found id: ""
	I0425 20:07:11.774785   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.774795   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:11.774803   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:11.774855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:11.814878   72712 cri.go:89] found id: ""
	I0425 20:07:11.814908   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.814920   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:11.814939   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:11.815000   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:11.853262   72712 cri.go:89] found id: ""
	I0425 20:07:11.853295   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.853306   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:11.853313   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:11.853379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:11.897291   72712 cri.go:89] found id: ""
	I0425 20:07:11.897314   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.897324   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:11.897333   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:11.897348   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:11.956913   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:11.956945   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.973787   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:11.973821   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:12.055801   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:12.055826   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:12.055842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:12.140238   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:12.140270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:14.685296   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:14.699655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:14.699740   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:14.741907   72712 cri.go:89] found id: ""
	I0425 20:07:14.741936   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.741947   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:14.741955   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:14.742017   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:14.786457   72712 cri.go:89] found id: ""
	I0425 20:07:14.786479   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.786487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:14.786493   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:14.786537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:14.825010   72712 cri.go:89] found id: ""
	I0425 20:07:14.825042   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.825055   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:14.825063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:14.825124   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:14.874834   72712 cri.go:89] found id: ""
	I0425 20:07:14.874856   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.874867   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:14.874875   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:14.874933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:14.914636   72712 cri.go:89] found id: ""
	I0425 20:07:14.914674   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.914685   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:14.914693   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:14.914752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:14.959327   72712 cri.go:89] found id: ""
	I0425 20:07:14.959356   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.959365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:14.959372   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:14.959425   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:15.000637   72712 cri.go:89] found id: ""
	I0425 20:07:15.000666   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.000674   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:15.000680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:15.000728   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:15.040497   72712 cri.go:89] found id: ""
	I0425 20:07:15.040523   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.040531   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:15.040539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:15.040550   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:15.120206   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:15.120240   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:15.168292   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:15.168324   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:15.222133   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:15.222164   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:15.237719   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:15.237746   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:15.323404   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:17.823552   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:17.838837   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:17.838911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:17.880547   72712 cri.go:89] found id: ""
	I0425 20:07:17.880584   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.880595   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:17.880608   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:17.880669   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:17.929700   72712 cri.go:89] found id: ""
	I0425 20:07:17.929730   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.929742   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:17.929797   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:17.929861   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:17.974057   72712 cri.go:89] found id: ""
	I0425 20:07:17.974081   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.974088   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:17.974094   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:17.974142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:18.013173   72712 cri.go:89] found id: ""
	I0425 20:07:18.013200   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.013209   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:18.013215   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:18.013267   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:18.053525   72712 cri.go:89] found id: ""
	I0425 20:07:18.053557   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.053568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:18.053580   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:18.053644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:18.095972   72712 cri.go:89] found id: ""
	I0425 20:07:18.096004   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.096016   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:18.096024   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:18.096089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:18.136792   72712 cri.go:89] found id: ""
	I0425 20:07:18.136823   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.136834   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:18.136842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:18.136904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:18.176562   72712 cri.go:89] found id: ""
	I0425 20:07:18.176594   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.176605   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:18.176619   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:18.176634   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:18.254402   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:18.254440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:18.298075   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:18.298112   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:18.356091   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:18.356124   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:18.373788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:18.373822   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:18.452545   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
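	The probe loop that repeats above and below can be reproduced by hand from inside the node. A minimal sketch, assuming the VM is still reachable via "minikube ssh" (the profile name is a placeholder, not taken from this log); the crictl invocation is the same one minikube runs:

	    # run inside the node shell, e.g. minikube ssh -p <profile>
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	        echo "== $c =="
	        sudo crictl ps -a --quiet --name="$c"    # empty output = no container with that name, matching the 'found id: ""' lines
	    done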
	I0425 20:07:20.952752   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:20.972054   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:20.972133   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:21.015572   72712 cri.go:89] found id: ""
	I0425 20:07:21.015602   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.015613   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:21.015621   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:21.015689   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:21.053313   72712 cri.go:89] found id: ""
	I0425 20:07:21.053342   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.053352   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:21.053359   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:21.053422   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:21.090343   72712 cri.go:89] found id: ""
	I0425 20:07:21.090373   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.090384   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:21.090391   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:21.090472   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:21.127148   72712 cri.go:89] found id: ""
	I0425 20:07:21.127174   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.127184   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:21.127192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:21.127258   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:21.167175   72712 cri.go:89] found id: ""
	I0425 20:07:21.167199   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.167207   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:21.167212   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:21.167263   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:21.212740   72712 cri.go:89] found id: ""
	I0425 20:07:21.212771   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.212783   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:21.212791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:21.212856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:21.250751   72712 cri.go:89] found id: ""
	I0425 20:07:21.250774   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.250782   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:21.250788   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:21.250833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:21.292387   72712 cri.go:89] found id: ""
	I0425 20:07:21.292414   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.292426   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:21.292436   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:21.292451   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:21.337695   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:21.337726   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:21.395479   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:21.395520   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:21.411538   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:21.411564   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:21.493248   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:21.493270   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:21.493282   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:24.076755   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:24.093549   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:24.093624   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:24.135660   72712 cri.go:89] found id: ""
	I0425 20:07:24.135686   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.135694   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:24.135705   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:24.135784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:24.179778   72712 cri.go:89] found id: ""
	I0425 20:07:24.179799   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.179807   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:24.179824   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:24.179883   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.226745   72712 cri.go:89] found id: ""
	I0425 20:07:24.226771   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.226780   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:24.226785   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:24.226839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:24.273302   72712 cri.go:89] found id: ""
	I0425 20:07:24.273327   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.273347   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:24.273354   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:24.273421   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:24.314117   72712 cri.go:89] found id: ""
	I0425 20:07:24.314149   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.314160   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:24.314167   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:24.314247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:24.353144   72712 cri.go:89] found id: ""
	I0425 20:07:24.353173   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.353184   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:24.353192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:24.353292   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:24.395899   72712 cri.go:89] found id: ""
	I0425 20:07:24.395925   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.395933   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:24.395938   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:24.395988   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:24.444470   72712 cri.go:89] found id: ""
	I0425 20:07:24.444503   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.444514   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:24.444525   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:24.444540   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:24.499845   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:24.499876   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:24.517421   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:24.517449   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:24.596509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:24.596530   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:24.596543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:24.710844   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:24.710878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
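	Every "describe nodes" attempt in these cycles fails the same way: the kubectl call targets localhost:8443 and is refused, which is consistent with no kube-apiserver container existing. A quick confirmation sketch, assuming shell access to the node and that ss (from iproute2) and curl are available there:

	    sudo ss -ltn | grep 8443 || echo "nothing listening on :8443"
	    curl -sk https://localhost:8443/healthz || true    # expected to fail with 'connection refused' while the apiserver is down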
	I0425 20:07:27.259541   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:27.275551   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:27.275609   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:27.314610   72712 cri.go:89] found id: ""
	I0425 20:07:27.314640   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.314651   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:27.314656   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:27.314712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:27.350100   72712 cri.go:89] found id: ""
	I0425 20:07:27.350132   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.350151   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:27.350158   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:27.350226   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:27.390197   72712 cri.go:89] found id: ""
	I0425 20:07:27.390238   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.390249   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:27.390257   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:27.390312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:27.431936   72712 cri.go:89] found id: ""
	I0425 20:07:27.431961   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.431973   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:27.431980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:27.432038   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:27.469175   72712 cri.go:89] found id: ""
	I0425 20:07:27.469204   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.469212   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:27.469218   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:27.469276   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:27.509385   72712 cri.go:89] found id: ""
	I0425 20:07:27.509416   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.509428   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:27.509436   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:27.509503   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:27.548997   72712 cri.go:89] found id: ""
	I0425 20:07:27.549034   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.549045   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:27.549052   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:27.549111   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:27.588925   72712 cri.go:89] found id: ""
	I0425 20:07:27.588959   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.588973   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:27.588985   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:27.589000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.635005   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:27.635040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:27.686587   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:27.686617   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:27.702913   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:27.702942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:27.775525   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:27.775551   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:27.775562   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.352358   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:30.367016   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:30.367088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:30.410878   72712 cri.go:89] found id: ""
	I0425 20:07:30.410906   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.410917   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:30.410927   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:30.410985   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:30.456150   72712 cri.go:89] found id: ""
	I0425 20:07:30.456173   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.456181   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:30.456186   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:30.456234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:30.495409   72712 cri.go:89] found id: ""
	I0425 20:07:30.495439   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.495450   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:30.495458   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:30.495516   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:30.535863   72712 cri.go:89] found id: ""
	I0425 20:07:30.535895   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.535906   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:30.535912   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:30.535971   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:30.573772   72712 cri.go:89] found id: ""
	I0425 20:07:30.573808   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.573819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:30.573826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:30.573892   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:30.626310   72712 cri.go:89] found id: ""
	I0425 20:07:30.626350   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.626362   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:30.626376   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:30.626438   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:30.666302   72712 cri.go:89] found id: ""
	I0425 20:07:30.666332   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.666343   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:30.666350   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:30.666413   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:30.703478   72712 cri.go:89] found id: ""
	I0425 20:07:30.703507   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.703519   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:30.703529   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:30.703543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:30.756532   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:30.756566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:30.772128   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:30.772158   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:30.853701   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:30.853728   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:30.853743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.935879   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:30.935917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:33.483702   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:33.498238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:33.498310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:33.545696   72712 cri.go:89] found id: ""
	I0425 20:07:33.545723   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.545731   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:33.545737   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:33.545791   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:33.590808   72712 cri.go:89] found id: ""
	I0425 20:07:33.590837   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.590849   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:33.590857   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:33.590919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:33.634529   72712 cri.go:89] found id: ""
	I0425 20:07:33.634554   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.634562   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:33.634572   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:33.634640   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:33.679055   72712 cri.go:89] found id: ""
	I0425 20:07:33.679082   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.679093   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:33.679100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:33.679160   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:33.720653   72712 cri.go:89] found id: ""
	I0425 20:07:33.720686   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.720698   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:33.720706   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:33.720777   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:33.766163   72712 cri.go:89] found id: ""
	I0425 20:07:33.766221   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.766233   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:33.766241   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:33.766314   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:33.810804   72712 cri.go:89] found id: ""
	I0425 20:07:33.810830   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.810839   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:33.810844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:33.810908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:33.858109   72712 cri.go:89] found id: ""
	I0425 20:07:33.858140   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.858152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:33.858162   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:33.858176   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:33.926296   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:33.926333   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:33.944220   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:33.944249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:34.042119   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:34.042191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:34.042234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:34.143694   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:34.143732   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:36.691575   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:36.710408   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:36.710490   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:36.760097   72712 cri.go:89] found id: ""
	I0425 20:07:36.760135   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.760144   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:36.760150   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:36.760208   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:36.801508   72712 cri.go:89] found id: ""
	I0425 20:07:36.801532   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.801541   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:36.801546   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:36.801602   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:36.842293   72712 cri.go:89] found id: ""
	I0425 20:07:36.842328   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.842340   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:36.842355   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:36.842418   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:36.884101   72712 cri.go:89] found id: ""
	I0425 20:07:36.884131   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.884141   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:36.884149   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:36.884211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:36.925007   72712 cri.go:89] found id: ""
	I0425 20:07:36.925032   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.925039   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:36.925045   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:36.925109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:36.964975   72712 cri.go:89] found id: ""
	I0425 20:07:36.965009   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.965020   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:36.965028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:36.965088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:37.030956   72712 cri.go:89] found id: ""
	I0425 20:07:37.030987   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.030999   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:37.031007   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:37.031080   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:37.105919   72712 cri.go:89] found id: ""
	I0425 20:07:37.105946   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.105956   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:37.105967   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:37.105983   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:37.196376   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:37.196415   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:37.240296   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:37.240334   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:37.304336   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:37.304371   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:37.323146   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:37.323184   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:37.414563   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:39.915087   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:39.930987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:39.931068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:39.967641   72712 cri.go:89] found id: ""
	I0425 20:07:39.967682   72712 logs.go:276] 0 containers: []
	W0425 20:07:39.967693   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:39.967698   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:39.967755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:40.009924   72712 cri.go:89] found id: ""
	I0425 20:07:40.009951   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.009959   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:40.009969   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:40.010019   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:40.049644   72712 cri.go:89] found id: ""
	I0425 20:07:40.049675   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.049689   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:40.049697   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:40.049759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:40.090487   72712 cri.go:89] found id: ""
	I0425 20:07:40.090509   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.090519   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:40.090524   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:40.090583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:40.137634   72712 cri.go:89] found id: ""
	I0425 20:07:40.137664   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.137674   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:40.137681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:40.137745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:40.174832   72712 cri.go:89] found id: ""
	I0425 20:07:40.174863   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.174874   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:40.174882   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:40.174947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:40.212559   72712 cri.go:89] found id: ""
	I0425 20:07:40.212585   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.212593   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:40.212598   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:40.212687   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:40.253459   72712 cri.go:89] found id: ""
	I0425 20:07:40.253494   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.253506   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:40.253518   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:40.253533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:40.311253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:40.311288   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:40.326693   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:40.326722   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:40.405792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:40.405816   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:40.405831   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:40.486712   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:40.486749   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:43.037730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:43.064471   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:43.064550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:43.130075   72712 cri.go:89] found id: ""
	I0425 20:07:43.130111   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.130129   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:43.130136   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:43.130195   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:43.169628   72712 cri.go:89] found id: ""
	I0425 20:07:43.169663   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.169675   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:43.169682   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:43.169748   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:43.214845   72712 cri.go:89] found id: ""
	I0425 20:07:43.214869   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.214877   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:43.214883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:43.214929   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:43.263047   72712 cri.go:89] found id: ""
	I0425 20:07:43.263069   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.263078   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:43.263083   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:43.263142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:43.313179   72712 cri.go:89] found id: ""
	I0425 20:07:43.313213   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.313223   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:43.313231   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:43.313295   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:43.353440   72712 cri.go:89] found id: ""
	I0425 20:07:43.353468   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.353480   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:43.353488   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:43.353546   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:43.392261   72712 cri.go:89] found id: ""
	I0425 20:07:43.392288   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.392296   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:43.392321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:43.392378   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:43.431111   72712 cri.go:89] found id: ""
	I0425 20:07:43.431139   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.431147   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:43.431155   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:43.431165   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:43.485087   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:43.485120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:43.501508   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:43.501536   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:43.586041   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:43.586073   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:43.586089   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.663194   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.663232   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:46.218461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:46.233195   72712 kubeadm.go:591] duration metric: took 4m4.06065248s to restartPrimaryControlPlane
	W0425 20:07:46.233281   72712 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:46.233311   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:48.166680   72712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.933342568s)
	I0425 20:07:48.166771   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:48.185391   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:07:48.198250   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:07:48.209825   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:07:48.209843   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:07:48.209897   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:07:48.220854   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:07:48.220909   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:07:48.231518   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:07:48.241515   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:07:48.241589   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:07:48.251764   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.261762   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:07:48.261813   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.271952   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:07:48.281914   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:07:48.281986   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
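	The config check above boils down to: for each kubeconfig under /etc/kubernetes, keep it only if it already points at the expected control-plane endpoint, otherwise delete it before running kubeadm init. The same pattern as a compact sketch (endpoint and file names are copied from the log; the loop structure is illustrative):

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done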
	I0425 20:07:48.292879   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:07:48.372322   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:07:48.372460   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:07:48.529730   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:07:48.529854   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:07:48.529979   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:07:48.753171   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:07:48.755473   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:07:48.755590   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:07:48.755692   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:07:48.755809   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:07:48.755905   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:07:48.756132   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:07:48.756317   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:07:48.756867   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:07:48.757498   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:07:48.758073   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:07:48.758581   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:07:48.758745   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:07:48.758842   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:07:48.894873   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:07:48.946907   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:07:49.084938   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:07:49.201925   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:07:49.219675   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:07:49.220891   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:07:49.220951   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:07:49.387310   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:07:49.389887   72712 out.go:204]   - Booting up control plane ...
	I0425 20:07:49.390011   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:07:49.395060   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:07:49.398108   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:07:49.398220   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:07:49.402596   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:08:29.403321   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:08:29.403717   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:29.404001   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:34.404410   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:34.404662   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:44.405293   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:44.405518   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:04.406406   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:04.406676   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.407969   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:44.408240   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.408259   72712 kubeadm.go:309] 
	I0425 20:09:44.408293   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:09:44.408355   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:09:44.408373   72712 kubeadm.go:309] 
	I0425 20:09:44.408417   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:09:44.408448   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:09:44.408562   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:09:44.408575   72712 kubeadm.go:309] 
	I0425 20:09:44.408655   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:09:44.408684   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:09:44.408711   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:09:44.408718   72712 kubeadm.go:309] 
	I0425 20:09:44.408812   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:09:44.408912   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:09:44.408939   72712 kubeadm.go:309] 
	I0425 20:09:44.409085   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:09:44.409217   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:09:44.409341   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:09:44.409418   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:09:44.409433   72712 kubeadm.go:309] 
	I0425 20:09:44.410319   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:09:44.410423   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:09:44.410510   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
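	The kubeadm output above already names the first-pass triage steps. Collected as a sketch that can be pasted into the node shell (the CRI-O socket path is the one used in this run; --no-pager is only added for non-interactive use):

	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet --no-pager | tail -n 100
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause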
	W0425 20:09:44.410640   72712 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0425 20:09:44.410700   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:09:45.395830   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:09:45.412628   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:09:45.423387   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:09:45.423412   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:09:45.423465   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:09:45.434317   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:09:45.434389   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:09:45.445657   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:09:45.455698   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:09:45.455772   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:09:45.466137   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.476140   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:09:45.476192   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.486410   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:09:45.495465   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:09:45.495522   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:09:45.505410   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:09:45.726416   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:11:42.214574   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:11:42.214715   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0425 20:11:42.216323   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:11:42.216393   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:11:42.216507   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:11:42.216650   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:11:42.216795   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:11:42.216882   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:11:42.218766   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:11:42.218847   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:11:42.218923   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:11:42.219042   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:11:42.219103   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:11:42.219167   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:11:42.219237   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:11:42.219321   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:11:42.219407   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:11:42.219519   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:11:42.219639   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:11:42.219694   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:11:42.219742   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:11:42.219786   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:11:42.219831   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:11:42.219883   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:11:42.219929   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:11:42.220029   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:11:42.220139   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:11:42.220204   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:11:42.220308   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:11:42.222891   72712 out.go:204]   - Booting up control plane ...
	I0425 20:11:42.222979   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:11:42.223054   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:11:42.223129   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:11:42.223222   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:11:42.223404   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:11:42.223459   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:11:42.223565   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.223835   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.223937   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224165   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224243   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224457   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224541   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224799   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224902   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.225125   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.225134   72712 kubeadm.go:309] 
	I0425 20:11:42.225166   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:11:42.225204   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:11:42.225210   72712 kubeadm.go:309] 
	I0425 20:11:42.225239   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:11:42.225267   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:11:42.225352   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:11:42.225358   72712 kubeadm.go:309] 
	I0425 20:11:42.225446   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:11:42.225476   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:11:42.225522   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:11:42.225533   72712 kubeadm.go:309] 
	I0425 20:11:42.225626   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:11:42.225714   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:11:42.225729   72712 kubeadm.go:309] 
	I0425 20:11:42.225875   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:11:42.225951   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:11:42.226022   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:11:42.226096   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:11:42.226129   72712 kubeadm.go:309] 
	I0425 20:11:42.226162   72712 kubeadm.go:393] duration metric: took 8m0.122692927s to StartCluster
	I0425 20:11:42.226242   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:11:42.226299   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:11:42.283295   72712 cri.go:89] found id: ""
	I0425 20:11:42.283320   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.283329   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:11:42.283335   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:11:42.283389   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:11:42.322462   72712 cri.go:89] found id: ""
	I0425 20:11:42.322493   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.322505   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:11:42.322512   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:11:42.322574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:11:42.372329   72712 cri.go:89] found id: ""
	I0425 20:11:42.372355   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.372363   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:11:42.372369   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:11:42.372416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:11:42.420348   72712 cri.go:89] found id: ""
	I0425 20:11:42.420374   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.420382   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:11:42.420389   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:11:42.420447   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:11:42.460274   72712 cri.go:89] found id: ""
	I0425 20:11:42.460317   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.460329   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:11:42.460337   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:11:42.460395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:11:42.503828   72712 cri.go:89] found id: ""
	I0425 20:11:42.503855   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.503867   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:11:42.503874   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:11:42.503933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:11:42.545045   72712 cri.go:89] found id: ""
	I0425 20:11:42.545070   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.545086   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:11:42.545095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:11:42.545156   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:11:42.586389   72712 cri.go:89] found id: ""
	I0425 20:11:42.586413   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.586421   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:11:42.586429   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:11:42.586440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:11:42.602835   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:11:42.602863   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:11:42.695131   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:11:42.695153   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:11:42.695168   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:11:42.819889   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:11:42.819922   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:11:42.869446   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:11:42.869474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0425 20:11:42.927184   72712 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0425 20:11:42.927236   72712 out.go:239] * 
	* 
	W0425 20:11:42.927291   72712 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.927311   72712 out.go:239] * 
	* 
	W0425 20:11:42.928275   72712 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 20:11:42.931353   72712 out.go:177] 
	W0425 20:11:42.932654   72712 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.932696   72712 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0425 20:11:42.932713   72712 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0425 20:11:42.934227   72712 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-210442 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 2 (246.780378ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-210442 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-210442 logs -n 25: (1.682876083s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-120641 sudo cat                             | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo find                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo crio                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-120641                                      | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113000 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:54 UTC |
	|         | disable-driver-mounts-113000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-512173            | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-744552             | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142196  | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210442        | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-512173                 | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-744552                  | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142196       | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:07 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210442             | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:59:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:59:17.353932   72712 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:59:17.354045   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354055   72712 out.go:304] Setting ErrFile to fd 2...
	I0425 19:59:17.354059   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354269   72712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:59:17.354795   72712 out.go:298] Setting JSON to false
	I0425 19:59:17.355681   72712 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6103,"bootTime":1714069054,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:59:17.355740   72712 start.go:139] virtualization: kvm guest
	I0425 19:59:17.357921   72712 out.go:177] * [old-k8s-version-210442] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:59:17.359325   72712 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:59:17.360640   72712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:59:17.359305   72712 notify.go:220] Checking for updates...
	I0425 19:59:17.361801   72712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:59:17.363086   72712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:59:17.364512   72712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:59:17.365842   72712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:59:17.367508   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 19:59:17.367909   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.367946   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.382995   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0425 19:59:17.383362   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.383991   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.384016   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.384378   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.384566   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.386317   72712 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0425 19:59:17.387599   72712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:59:17.387904   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.387948   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.402999   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0425 19:59:17.403506   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.403962   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.403986   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.404318   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.404472   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.438308   72712 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:59:17.439686   72712 start.go:297] selected driver: kvm2
	I0425 19:59:17.439716   72712 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.439831   72712 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:59:17.440486   72712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.440553   72712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:59:17.454719   72712 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:59:17.455114   72712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:59:17.455184   72712 cni.go:84] Creating CNI manager for ""
	I0425 19:59:17.455203   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:59:17.455266   72712 start.go:340] cluster config:
	{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.455393   72712 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.457210   72712 out.go:177] * Starting "old-k8s-version-210442" primary control-plane node in "old-k8s-version-210442" cluster
	I0425 19:59:18.474583   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:17.458384   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 19:59:17.458418   72712 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:59:17.458430   72712 cache.go:56] Caching tarball of preloaded images
	I0425 19:59:17.458517   72712 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:59:17.458529   72712 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0425 19:59:17.458638   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 19:59:17.458844   72712 start.go:360] acquireMachinesLock for old-k8s-version-210442: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:59:24.554517   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:27.626446   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:33.706451   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:36.778527   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:42.858471   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:45.930403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:52.010482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:55.082403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:01.162466   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:04.234537   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:10.314506   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:13.386463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:19.466523   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:22.538461   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:28.622423   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:31.690489   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:37.770534   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:40.842458   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:46.922463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:49.994524   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:56.074478   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:59.146487   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:05.226452   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:08.298480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:14.378455   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:17.450469   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:23.530513   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:26.602470   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:32.682497   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:35.754500   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:41.834480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:44.906482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:50.986468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:54.058502   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:00.138459   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:03.210554   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:09.290491   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:12.362472   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:18.442476   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:21.514468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.599158   72220 start.go:364] duration metric: took 4m21.632012686s to acquireMachinesLock for "no-preload-744552"
	I0425 20:02:30.599206   72220 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:30.599212   72220 fix.go:54] fixHost starting: 
	I0425 20:02:30.599516   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:30.599545   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:30.614130   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0425 20:02:30.614502   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:30.614962   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:02:30.614979   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:30.615306   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:30.615513   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:30.615640   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:02:30.617129   72220 fix.go:112] recreateIfNeeded on no-preload-744552: state=Stopped err=<nil>
	I0425 20:02:30.617150   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	W0425 20:02:30.617300   72220 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:30.619253   72220 out.go:177] * Restarting existing kvm2 VM for "no-preload-744552" ...
	I0425 20:02:27.594454   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.596600   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:02:30.596654   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.596986   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:02:30.597016   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.597206   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:02:30.599042   71966 machine.go:97] duration metric: took 4m44.620242563s to provisionDockerMachine
	I0425 20:02:30.599079   71966 fix.go:56] duration metric: took 4m44.639860566s for fixHost
	I0425 20:02:30.599085   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 4m44.639890108s
	W0425 20:02:30.599104   71966 start.go:713] error starting host: provision: host is not running
	W0425 20:02:30.599182   71966 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0425 20:02:30.599192   71966 start.go:728] Will try again in 5 seconds ...
	I0425 20:02:30.620801   72220 main.go:141] libmachine: (no-preload-744552) Calling .Start
	I0425 20:02:30.620978   72220 main.go:141] libmachine: (no-preload-744552) Ensuring networks are active...
	I0425 20:02:30.621640   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network default is active
	I0425 20:02:30.621965   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network mk-no-preload-744552 is active
	I0425 20:02:30.622317   72220 main.go:141] libmachine: (no-preload-744552) Getting domain xml...
	I0425 20:02:30.623010   72220 main.go:141] libmachine: (no-preload-744552) Creating domain...
	I0425 20:02:31.809967   72220 main.go:141] libmachine: (no-preload-744552) Waiting to get IP...
	I0425 20:02:31.810856   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:31.811353   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:31.811403   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:31.811308   73381 retry.go:31] will retry after 294.641704ms: waiting for machine to come up
	I0425 20:02:32.107955   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.108508   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.108542   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.108449   73381 retry.go:31] will retry after 373.307428ms: waiting for machine to come up
	I0425 20:02:32.483111   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.483590   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.483619   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.483546   73381 retry.go:31] will retry after 484.455862ms: waiting for machine to come up
	I0425 20:02:32.969188   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.969657   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.969694   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.969602   73381 retry.go:31] will retry after 382.359725ms: waiting for machine to come up
	I0425 20:02:33.353143   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.353598   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.353621   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.353550   73381 retry.go:31] will retry after 515.389674ms: waiting for machine to come up
	I0425 20:02:35.602273   71966 start.go:360] acquireMachinesLock for embed-certs-512173: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 20:02:33.870172   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.870652   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.870676   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.870603   73381 retry.go:31] will retry after 714.032032ms: waiting for machine to come up
	I0425 20:02:34.586478   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:34.586833   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:34.586861   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:34.586791   73381 retry.go:31] will retry after 1.005122465s: waiting for machine to come up
	I0425 20:02:35.593962   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:35.594367   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:35.594400   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:35.594310   73381 retry.go:31] will retry after 1.483740326s: waiting for machine to come up
	I0425 20:02:37.079306   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:37.079751   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:37.079784   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:37.079700   73381 retry.go:31] will retry after 1.828802911s: waiting for machine to come up
	I0425 20:02:38.910631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:38.911138   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:38.911163   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:38.911086   73381 retry.go:31] will retry after 1.528405609s: waiting for machine to come up
	I0425 20:02:40.441741   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:40.442251   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:40.442277   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:40.442200   73381 retry.go:31] will retry after 2.817901976s: waiting for machine to come up
	I0425 20:02:43.263903   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:43.264376   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:43.264408   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:43.264324   73381 retry.go:31] will retry after 2.258888981s: waiting for machine to come up
	I0425 20:02:45.525701   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:45.526139   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:45.526168   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:45.526106   73381 retry.go:31] will retry after 4.008258204s: waiting for machine to come up
	I0425 20:02:50.951421   72304 start.go:364] duration metric: took 4m34.5614094s to acquireMachinesLock for "default-k8s-diff-port-142196"
	I0425 20:02:50.951491   72304 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:50.951500   72304 fix.go:54] fixHost starting: 
	I0425 20:02:50.951906   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:50.951944   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:50.968074   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33481
	I0425 20:02:50.968452   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:50.968862   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:02:50.968886   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:50.969238   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:50.969460   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:02:50.969622   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:02:50.971100   72304 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142196: state=Stopped err=<nil>
	I0425 20:02:50.971125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	W0425 20:02:50.971271   72304 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:50.974623   72304 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142196" ...
	I0425 20:02:50.975991   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Start
	I0425 20:02:50.976154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring networks are active...
	I0425 20:02:50.976794   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network default is active
	I0425 20:02:50.977111   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network mk-default-k8s-diff-port-142196 is active
	I0425 20:02:50.977490   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Getting domain xml...
	I0425 20:02:50.978200   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Creating domain...
	I0425 20:02:49.538522   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.538999   72220 main.go:141] libmachine: (no-preload-744552) Found IP for machine: 192.168.72.142
	I0425 20:02:49.539033   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has current primary IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.539043   72220 main.go:141] libmachine: (no-preload-744552) Reserving static IP address...
	I0425 20:02:49.539420   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.539458   72220 main.go:141] libmachine: (no-preload-744552) DBG | skip adding static IP to network mk-no-preload-744552 - found existing host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"}
	I0425 20:02:49.539469   72220 main.go:141] libmachine: (no-preload-744552) Reserved static IP address: 192.168.72.142
	I0425 20:02:49.539483   72220 main.go:141] libmachine: (no-preload-744552) Waiting for SSH to be available...
	I0425 20:02:49.539490   72220 main.go:141] libmachine: (no-preload-744552) DBG | Getting to WaitForSSH function...
	I0425 20:02:49.541631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542042   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.542073   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542221   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH client type: external
	I0425 20:02:49.542270   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa (-rw-------)
	I0425 20:02:49.542300   72220 main.go:141] libmachine: (no-preload-744552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:02:49.542316   72220 main.go:141] libmachine: (no-preload-744552) DBG | About to run SSH command:
	I0425 20:02:49.542334   72220 main.go:141] libmachine: (no-preload-744552) DBG | exit 0
	I0425 20:02:49.670034   72220 main.go:141] libmachine: (no-preload-744552) DBG | SSH cmd err, output: <nil>: 
	I0425 20:02:49.670414   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetConfigRaw
	I0425 20:02:49.671039   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:49.673279   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673592   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.673629   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673878   72220 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/config.json ...
	I0425 20:02:49.674066   72220 machine.go:94] provisionDockerMachine start ...
	I0425 20:02:49.674083   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:49.674317   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.676767   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677084   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.677115   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677238   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.677413   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677562   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677698   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.677841   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.678037   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.678049   72220 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:02:49.790734   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:02:49.790764   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791028   72220 buildroot.go:166] provisioning hostname "no-preload-744552"
	I0425 20:02:49.791061   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791248   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.793907   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794279   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.794313   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794450   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.794649   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794787   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794908   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.795054   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.795256   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.795277   72220 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-744552 && echo "no-preload-744552" | sudo tee /etc/hostname
	I0425 20:02:49.925459   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744552
	
	I0425 20:02:49.925483   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.928282   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928646   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.928680   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928831   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.929012   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929194   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929327   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.929481   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.929679   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.929709   72220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-744552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-744552/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-744552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:02:50.052805   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:02:50.052841   72220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:02:50.052861   72220 buildroot.go:174] setting up certificates
	I0425 20:02:50.052875   72220 provision.go:84] configureAuth start
	I0425 20:02:50.052887   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:50.053193   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.055800   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056145   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.056168   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056339   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.058090   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058395   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.058429   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058526   72220 provision.go:143] copyHostCerts
	I0425 20:02:50.058577   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:02:50.058587   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:02:50.058647   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:02:50.058742   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:02:50.058750   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:02:50.058774   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:02:50.058827   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:02:50.058834   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:02:50.058855   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:02:50.058904   72220 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.no-preload-744552 san=[127.0.0.1 192.168.72.142 localhost minikube no-preload-744552]
	I0425 20:02:50.247711   72220 provision.go:177] copyRemoteCerts
	I0425 20:02:50.247768   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:02:50.247792   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.250146   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250560   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.250600   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250780   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.250978   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.251128   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.251272   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.338105   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:02:50.365554   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 20:02:50.391433   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:02:50.416606   72220 provision.go:87] duration metric: took 363.720332ms to configureAuth
	I0425 20:02:50.416627   72220 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:02:50.416795   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:02:50.416876   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.419385   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419731   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.419764   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419903   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.420079   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420322   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420557   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.420724   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.420909   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.420929   72220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:02:50.702065   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:02:50.702104   72220 machine.go:97] duration metric: took 1.028026584s to provisionDockerMachine
	I0425 20:02:50.702117   72220 start.go:293] postStartSetup for "no-preload-744552" (driver="kvm2")
	I0425 20:02:50.702131   72220 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:02:50.702165   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.702531   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:02:50.702572   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.705595   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.705948   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.705992   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.706173   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.706367   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.706588   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.706759   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.794791   72220 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:02:50.799592   72220 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:02:50.799621   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:02:50.799701   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:02:50.799799   72220 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:02:50.799913   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:02:50.810796   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:02:50.836919   72220 start.go:296] duration metric: took 134.787005ms for postStartSetup
	I0425 20:02:50.836972   72220 fix.go:56] duration metric: took 20.237758066s for fixHost
	I0425 20:02:50.836995   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.839818   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840295   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.840325   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840429   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.840600   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840752   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840929   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.841079   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.841307   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.841338   72220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:02:50.951251   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075370.921171901
	
	I0425 20:02:50.951272   72220 fix.go:216] guest clock: 1714075370.921171901
	I0425 20:02:50.951279   72220 fix.go:229] Guest: 2024-04-25 20:02:50.921171901 +0000 UTC Remote: 2024-04-25 20:02:50.836976462 +0000 UTC m=+282.018789867 (delta=84.195439ms)
	I0425 20:02:50.951312   72220 fix.go:200] guest clock delta is within tolerance: 84.195439ms
	I0425 20:02:50.951321   72220 start.go:83] releasing machines lock for "no-preload-744552", held for 20.352126868s
	I0425 20:02:50.951348   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.951612   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.954231   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954614   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.954638   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954821   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955240   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955419   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955492   72220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:02:50.955540   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.955659   72220 ssh_runner.go:195] Run: cat /version.json
	I0425 20:02:50.955688   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.958155   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958476   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958517   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958541   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958661   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.958808   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.958903   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958932   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.958935   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.959045   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.959181   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.959192   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.959360   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.959471   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:51.066809   72220 ssh_runner.go:195] Run: systemctl --version
	I0425 20:02:51.073198   72220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:02:51.228547   72220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:02:51.236443   72220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:02:51.236518   72220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:02:51.256226   72220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:02:51.256244   72220 start.go:494] detecting cgroup driver to use...
	I0425 20:02:51.256307   72220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:02:51.278596   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:02:51.295692   72220 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:02:51.295751   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:02:51.310940   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:02:51.326072   72220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:02:51.459064   72220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:02:51.614563   72220 docker.go:233] disabling docker service ...
	I0425 20:02:51.614639   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:02:51.638817   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:02:51.658265   72220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:02:51.818412   72220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:02:51.943830   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:02:51.960672   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:02:51.982028   72220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:02:51.982090   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:51.994990   72220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:02:51.995079   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.007907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.020225   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.033306   72220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:02:52.046241   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.058282   72220 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.078907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.090258   72220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:02:52.100796   72220 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:02:52.100873   72220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:02:52.115600   72220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:02:52.125458   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:02:52.288142   72220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:02:52.430252   72220 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:02:52.430353   72220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:02:52.436493   72220 start.go:562] Will wait 60s for crictl version
	I0425 20:02:52.436565   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.441427   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:02:52.479709   72220 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:02:52.479810   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.512180   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.545115   72220 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:02:52.546476   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:52.549314   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549723   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:52.549759   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549926   72220 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0425 20:02:52.554924   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:02:52.568804   72220 kubeadm.go:877] updating cluster {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:02:52.568958   72220 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:02:52.568997   72220 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:02:52.609095   72220 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:02:52.609117   72220 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:02:52.609156   72220 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.609188   72220 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.609185   72220 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.609214   72220 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.609227   72220 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.609256   72220 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.609334   72220 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.609370   72220 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610726   72220 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.610747   72220 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610772   72220 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.610724   72220 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.610800   72220 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.610807   72220 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.611075   72220 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.611096   72220 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.753069   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0425 20:02:52.771762   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.825052   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908030   72220 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0425 20:02:52.908082   72220 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.908113   72220 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0425 20:02:52.908127   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.908135   72220 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908164   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.915126   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.915132   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.967834   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.969385   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.973718   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0425 20:02:52.973787   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0425 20:02:52.973823   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:52.973870   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:52.985763   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.986695   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.068153   72220 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0425 20:02:53.068196   72220 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.068269   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099237   72220 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0425 20:02:53.099257   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0425 20:02:53.099274   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099290   72220 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:53.099294   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0425 20:02:53.099330   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099368   72220 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0425 20:02:53.099401   72220 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:53.099433   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099333   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.115478   72220 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0425 20:02:53.115523   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.115526   72220 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.115610   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.550328   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.240552   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting to get IP...
	I0425 20:02:52.241327   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241657   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241757   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.241648   73527 retry.go:31] will retry after 195.006273ms: waiting for machine to come up
	I0425 20:02:52.438154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438702   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438726   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.438657   73527 retry.go:31] will retry after 365.911905ms: waiting for machine to come up
	I0425 20:02:52.806281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806793   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806826   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.806727   73527 retry.go:31] will retry after 448.572137ms: waiting for machine to come up
	I0425 20:02:53.257396   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257935   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257966   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.257889   73527 retry.go:31] will retry after 560.886917ms: waiting for machine to come up
	I0425 20:02:53.820527   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820954   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820979   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.820915   73527 retry.go:31] will retry after 514.294303ms: waiting for machine to come up
	I0425 20:02:54.336706   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337129   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:54.337101   73527 retry.go:31] will retry after 853.040726ms: waiting for machine to come up
	I0425 20:02:55.192349   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192857   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:55.192774   73527 retry.go:31] will retry after 1.17554782s: waiting for machine to come up
	I0425 20:02:56.232794   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.133436829s)
	I0425 20:02:56.232845   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0425 20:02:56.232854   72220 ssh_runner.go:235] Completed: which crictl: (3.133373607s)
	I0425 20:02:56.232875   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232915   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232961   72220 ssh_runner.go:235] Completed: which crictl: (3.133515676s)
	I0425 20:02:56.232919   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:56.233011   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:56.233050   72220 ssh_runner.go:235] Completed: which crictl: (3.11742497s)
	I0425 20:02:56.233089   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:56.233126   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (3.117580594s)
	I0425 20:02:56.233160   72220 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.6828061s)
	I0425 20:02:56.233167   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0425 20:02:56.233207   72220 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0425 20:02:56.233242   72220 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:56.233248   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:56.233284   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:56.323764   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0425 20:02:56.323884   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:02:56.323906   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0425 20:02:56.323989   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:02:58.553707   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.320762887s)
	I0425 20:02:58.553742   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0425 20:02:58.553768   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.320739179s)
	I0425 20:02:58.553784   72220 ssh_runner.go:235] Completed: which crictl: (2.320487912s)
	I0425 20:02:58.553807   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0425 20:02:58.553838   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:58.553864   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.320587538s)
	I0425 20:02:58.553889   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:02:58.553909   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0425 20:02:58.553948   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.229944417s)
	I0425 20:02:58.553959   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553989   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0425 20:02:58.554009   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553910   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.23000183s)
	I0425 20:02:58.554069   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0425 20:02:58.602692   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0425 20:02:58.602694   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0425 20:02:58.602819   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:02:56.369693   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370169   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:56.370115   73527 retry.go:31] will retry after 1.260629487s: waiting for machine to come up
	I0425 20:02:57.632705   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633187   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:57.633150   73527 retry.go:31] will retry after 1.291948113s: waiting for machine to come up
	I0425 20:02:58.926675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927167   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927196   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:58.927111   73527 retry.go:31] will retry after 1.869565597s: waiting for machine to come up
	I0425 20:03:00.799357   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799820   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799850   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:00.799750   73527 retry.go:31] will retry after 2.157801293s: waiting for machine to come up
	I0425 20:03:00.027830   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.473790165s)
	I0425 20:03:00.027869   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0425 20:03:00.027895   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027943   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027842   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.424998268s)
	I0425 20:03:00.027985   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0425 20:03:02.204218   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.176247608s)
	I0425 20:03:02.204254   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0425 20:03:02.204290   72220 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.204335   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.959407   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959789   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959812   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:02.959745   73527 retry.go:31] will retry after 2.617480271s: waiting for machine to come up
	I0425 20:03:05.579300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:05.579775   73527 retry.go:31] will retry after 4.058370199s: waiting for machine to come up
	I0425 20:03:06.132743   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.928385447s)
	I0425 20:03:06.132779   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0425 20:03:06.132805   72220 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:06.132857   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:08.314803   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.181910584s)
	I0425 20:03:08.314842   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0425 20:03:08.314881   72220 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:03:08.314930   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:03:11.255486   72712 start.go:364] duration metric: took 3m53.796595105s to acquireMachinesLock for "old-k8s-version-210442"
	I0425 20:03:11.255550   72712 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:11.255569   72712 fix.go:54] fixHost starting: 
	I0425 20:03:11.256083   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:11.256128   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:11.272950   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0425 20:03:11.273365   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:11.273878   72712 main.go:141] libmachine: Using API Version  1
	I0425 20:03:11.273907   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:11.274277   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:11.274487   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:11.274666   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetState
	I0425 20:03:11.276420   72712 fix.go:112] recreateIfNeeded on old-k8s-version-210442: state=Stopped err=<nil>
	I0425 20:03:11.276454   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	W0425 20:03:11.276608   72712 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:11.279156   72712 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210442" ...
	I0425 20:03:09.639300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639833   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Found IP for machine: 192.168.39.123
	I0425 20:03:09.639867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has current primary IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserving static IP address...
	I0425 20:03:09.640257   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.640281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | skip adding static IP to network mk-default-k8s-diff-port-142196 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"}
	I0425 20:03:09.640300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserved static IP address: 192.168.39.123
	I0425 20:03:09.640313   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for SSH to be available...
	I0425 20:03:09.640321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Getting to WaitForSSH function...
	I0425 20:03:09.643058   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643371   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.643400   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643506   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH client type: external
	I0425 20:03:09.643557   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa (-rw-------)
	I0425 20:03:09.643586   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:09.643609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | About to run SSH command:
	I0425 20:03:09.643618   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | exit 0
	I0425 20:03:09.766707   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:09.767091   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetConfigRaw
	I0425 20:03:09.767818   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:09.770573   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771012   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.771047   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771296   72304 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/config.json ...
	I0425 20:03:09.771580   72304 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:09.771609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:09.771884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.774255   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.774699   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774866   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.775044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775213   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775362   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.775520   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.775781   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.775797   72304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:09.884259   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:09.884288   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884519   72304 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142196"
	I0425 20:03:09.884547   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884747   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.887391   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.887798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.887829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.888003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.888215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888542   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.888703   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.888918   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.888934   72304 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142196 && echo "default-k8s-diff-port-142196" | sudo tee /etc/hostname
	I0425 20:03:10.015919   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142196
	
	I0425 20:03:10.015951   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.018640   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.018955   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.018987   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.019201   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.019398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019729   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.019906   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.020098   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.020120   72304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142196' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142196/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142196' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:10.145789   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:10.145822   72304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:10.145873   72304 buildroot.go:174] setting up certificates
	I0425 20:03:10.145886   72304 provision.go:84] configureAuth start
	I0425 20:03:10.145899   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:10.146185   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:10.148943   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149309   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.149342   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149492   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.152000   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152418   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.152445   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152621   72304 provision.go:143] copyHostCerts
	I0425 20:03:10.152681   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:10.152693   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:10.152758   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:10.152890   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:10.152905   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:10.152940   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:10.153033   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:10.153044   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:10.153072   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:10.153145   72304 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142196 san=[127.0.0.1 192.168.39.123 default-k8s-diff-port-142196 localhost minikube]
	I0425 20:03:10.572412   72304 provision.go:177] copyRemoteCerts
	I0425 20:03:10.572473   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:10.572496   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.575083   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.575421   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.575696   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.575799   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.575916   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:10.657850   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:10.685493   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0425 20:03:10.713230   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:10.740577   72304 provision.go:87] duration metric: took 594.674196ms to configureAuth
	I0425 20:03:10.740604   72304 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:10.740835   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:10.740916   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.743709   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744039   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.744071   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744236   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.744434   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744621   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744723   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.744901   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.745065   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.745083   72304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:11.017816   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:11.017844   72304 machine.go:97] duration metric: took 1.24624593s to provisionDockerMachine
	I0425 20:03:11.017858   72304 start.go:293] postStartSetup for "default-k8s-diff-port-142196" (driver="kvm2")
	I0425 20:03:11.017871   72304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:11.017892   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.018195   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:11.018231   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.020759   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021067   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.021092   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.021403   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.021600   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.021729   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.106290   72304 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:11.111532   72304 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:11.111560   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:11.111645   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:11.111744   72304 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:11.111856   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:11.122216   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:11.150472   72304 start.go:296] duration metric: took 132.600197ms for postStartSetup
	I0425 20:03:11.150520   72304 fix.go:56] duration metric: took 20.199020729s for fixHost
	I0425 20:03:11.150544   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.153466   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.153798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.153824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.154055   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.154289   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154483   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154635   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.154824   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:11.154991   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:11.155001   72304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:03:11.255330   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075391.221756501
	
	I0425 20:03:11.255357   72304 fix.go:216] guest clock: 1714075391.221756501
	I0425 20:03:11.255365   72304 fix.go:229] Guest: 2024-04-25 20:03:11.221756501 +0000 UTC Remote: 2024-04-25 20:03:11.15052524 +0000 UTC m=+294.908822896 (delta=71.231261ms)
	I0425 20:03:11.255384   72304 fix.go:200] guest clock delta is within tolerance: 71.231261ms
	I0425 20:03:11.255388   72304 start.go:83] releasing machines lock for "default-k8s-diff-port-142196", held for 20.303917474s
	I0425 20:03:11.255419   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.255700   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:11.258740   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259076   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.259104   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259414   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.259906   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260102   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260197   72304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:11.260241   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.260350   72304 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:11.260374   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.262843   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263001   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263216   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263245   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263365   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263480   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263669   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263679   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263864   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264026   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264039   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.264203   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.280701   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .Start
	I0425 20:03:11.280895   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring networks are active...
	I0425 20:03:11.281729   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network default is active
	I0425 20:03:11.282158   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network mk-old-k8s-version-210442 is active
	I0425 20:03:11.282639   72712 main.go:141] libmachine: (old-k8s-version-210442) Getting domain xml...
	I0425 20:03:11.283399   72712 main.go:141] libmachine: (old-k8s-version-210442) Creating domain...
	I0425 20:03:11.339564   72304 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:11.364667   72304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:11.526308   72304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:11.533487   72304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:11.533563   72304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:11.552090   72304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:11.552120   72304 start.go:494] detecting cgroup driver to use...
	I0425 20:03:11.552196   72304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:11.569573   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:11.584425   72304 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:11.584489   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:11.599083   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:11.613739   72304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:11.739574   72304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:11.911318   72304 docker.go:233] disabling docker service ...
	I0425 20:03:11.911390   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:11.928743   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:11.946101   72304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:12.112740   72304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:12.246863   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:12.269551   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:12.298838   72304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:12.298907   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.312059   72304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:12.312113   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.324076   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.336239   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.350088   72304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:12.368362   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.385406   72304 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.407195   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.420065   72304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:12.431195   72304 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:12.431260   72304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:12.446263   72304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:12.457137   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:12.622756   72304 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:12.799932   72304 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:12.800012   72304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:12.807795   72304 start.go:562] Will wait 60s for crictl version
	I0425 20:03:12.807862   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:03:12.813860   72304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:12.861249   72304 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:12.861327   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.896140   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.942768   72304 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:03:09.079550   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0425 20:03:09.079607   72220 cache_images.go:123] Successfully loaded all cached images
	I0425 20:03:09.079615   72220 cache_images.go:92] duration metric: took 16.470485982s to LoadCachedImages
	I0425 20:03:09.079629   72220 kubeadm.go:928] updating node { 192.168.72.142 8443 v1.30.0 crio true true} ...
	I0425 20:03:09.079764   72220 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-744552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:09.079839   72220 ssh_runner.go:195] Run: crio config
	I0425 20:03:09.139170   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:09.139194   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:09.139206   72220 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:09.139225   72220 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.142 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-744552 NodeName:no-preload-744552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:09.139365   72220 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-744552"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:09.139426   72220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:09.151828   72220 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:09.151884   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:09.163310   72220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0425 20:03:09.183132   72220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:09.203038   72220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0425 20:03:09.223717   72220 ssh_runner.go:195] Run: grep 192.168.72.142	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:09.228467   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:09.243976   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:09.361475   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:09.380862   72220 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552 for IP: 192.168.72.142
	I0425 20:03:09.380886   72220 certs.go:194] generating shared ca certs ...
	I0425 20:03:09.380901   72220 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:09.381076   72220 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:09.381132   72220 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:09.381147   72220 certs.go:256] generating profile certs ...
	I0425 20:03:09.381254   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/client.key
	I0425 20:03:09.381337   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key.a705cb96
	I0425 20:03:09.381392   72220 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key
	I0425 20:03:09.381538   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:09.381586   72220 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:09.381601   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:09.381638   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:09.381668   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:09.381702   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:09.381761   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:09.382459   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:09.423895   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:09.462481   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:09.491394   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:09.532779   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 20:03:09.569107   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 20:03:09.597381   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:09.623962   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:09.651141   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:09.677295   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:09.702404   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:09.729275   72220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:09.748421   72220 ssh_runner.go:195] Run: openssl version
	I0425 20:03:09.754848   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:09.768121   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774468   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774529   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.783568   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:09.799120   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:09.812983   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818660   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818740   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.826091   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:09.840115   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:09.853372   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858387   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858455   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.864693   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:09.876755   72220 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:09.882829   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:09.890219   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:09.897091   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:09.906017   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:09.913154   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:09.919989   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 20:03:09.926552   72220 kubeadm.go:391] StartCluster: {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:09.926671   72220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:09.926734   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:09.971983   72220 cri.go:89] found id: ""
	I0425 20:03:09.972071   72220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:09.983371   72220 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:09.983399   72220 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:09.983406   72220 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:09.983451   72220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:09.994047   72220 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:09.995080   72220 kubeconfig.go:125] found "no-preload-744552" server: "https://192.168.72.142:8443"
	I0425 20:03:09.997202   72220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:10.007666   72220 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.142
	I0425 20:03:10.007703   72220 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:10.007713   72220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:10.007752   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:10.049581   72220 cri.go:89] found id: ""
	I0425 20:03:10.049679   72220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:10.071032   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:10.083240   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:10.083267   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:10.083314   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:10.093444   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:10.093507   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:10.104291   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:10.114596   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:10.114659   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:10.125118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.138299   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:10.138362   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.152185   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:10.163493   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:10.163555   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:10.177214   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:10.188286   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:10.312536   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.497483   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.184911769s)
	I0425 20:03:11.497531   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.753732   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.871246   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.968366   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:11.968445   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.468885   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.968598   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:13.037502   72220 api_server.go:72] duration metric: took 1.069135698s to wait for apiserver process to appear ...
	I0425 20:03:13.037542   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:13.037568   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:13.038540   72220 api_server.go:269] stopped: https://192.168.72.142:8443/healthz: Get "https://192.168.72.142:8443/healthz": dial tcp 192.168.72.142:8443: connect: connection refused
	I0425 20:03:13.537713   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:12.944206   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:12.947412   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.947822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:12.947852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.948086   72304 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:12.953504   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
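
The one-liner above drops any stale host.minikube.internal entry from /etc/hosts, appends the fresh 192.168.39.1 mapping, and copies the result back into place through a temp file. A minimal Go sketch of the same rewrite, assuming an illustrative path and hostname (it renames a temp file instead of cp'ing, which is one reasonable way to avoid readers ever seeing a half-written hosts file):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostEntry rewrites path so that exactly one line maps hostname to ip,
// the same effect as the grep/echo/cp one-liner in the log above.
func setHostEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any stale mapping for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	// Rename rather than copy so the file is replaced in one step.
	return os.Rename(tmp, path)
}

func main() {
	// Illustrative call; the log targets /etc/hosts inside the guest.
	fmt.Println(setHostEntry("/tmp/hosts-copy", "192.168.39.1", "host.minikube.internal"))
}
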
	I0425 20:03:12.969171   72304 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:12.969344   72304 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:12.969402   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:13.016509   72304 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:13.016585   72304 ssh_runner.go:195] Run: which lz4
	I0425 20:03:13.022023   72304 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 20:03:13.027861   72304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:13.027896   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:14.913405   72304 crio.go:462] duration metric: took 1.891428846s to copy over tarball
	I0425 20:03:14.913466   72304 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:03:12.659136   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting to get IP...
	I0425 20:03:12.660227   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.660770   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.660843   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.660724   73691 retry.go:31] will retry after 234.96602ms: waiting for machine to come up
	I0425 20:03:12.897395   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.897966   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.897993   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.897913   73691 retry.go:31] will retry after 387.692223ms: waiting for machine to come up
	I0425 20:03:13.287742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.288414   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.288443   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.288397   73691 retry.go:31] will retry after 461.897892ms: waiting for machine to come up
	I0425 20:03:13.752061   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.752574   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.752603   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.752513   73691 retry.go:31] will retry after 452.347315ms: waiting for machine to come up
	I0425 20:03:14.206275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.206684   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.206708   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.206629   73691 retry.go:31] will retry after 466.12355ms: waiting for machine to come up
	I0425 20:03:14.674265   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.674788   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.674818   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.674735   73691 retry.go:31] will retry after 697.70071ms: waiting for machine to come up
	I0425 20:03:15.373862   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:15.374297   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:15.374325   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:15.374252   73691 retry.go:31] will retry after 835.73273ms: waiting for machine to come up
	I0425 20:03:16.211394   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:16.211870   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:16.211902   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:16.211815   73691 retry.go:31] will retry after 1.26739043s: waiting for machine to come up
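
The retry.go lines in this stretch show libmachine repeatedly asking libvirt for the domain's DHCP lease and sleeping for a growing, jittered interval until an IP appears. A rough sketch of that wait loop, with a hypothetical lookupIP standing in for the actual lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases;
// it fails until the guest has been assigned an address.
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

// waitForIP retries lookupIP with a growing, jittered delay, roughly the
// pattern the retry.go lines above record.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // back off a little on each miss
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	if _, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}
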
	I0425 20:03:16.441793   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.441829   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.441848   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.506023   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.506057   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.538293   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.544891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:16.544925   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.038519   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.049842   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.049883   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.538420   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.545891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.545929   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:18.038192   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:18.042957   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:03:18.063131   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:18.063171   72220 api_server.go:131] duration metric: took 5.025619242s to wait for apiserver health ...
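
The healthz exchange above is the usual bootstrap progression: connection refused while the apiserver process is still starting, 403 once it is listening but the anonymous probe is rejected, 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200 "ok". A minimal sketch of that polling loop, with the endpoint hard-coded to the address from this log and TLS verification skipped because the probe is anonymous against a self-signed serving cert:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the timeout elapses.
// 403 (anonymous user) and 500 (post-start hooks still failing) both count
// as "not ready yet", matching the progression in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. connection refused: the process is not listening yet.
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	// Address taken from this log; substitute your cluster's advertise IP and port.
	if err := waitForHealthz("https://192.168.72.142:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
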
	I0425 20:03:18.063182   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:18.063192   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:18.405047   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:03:18.552639   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:18.565507   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:03:18.591534   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:17.662135   72304 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.748640149s)
	I0425 20:03:17.662171   72304 crio.go:469] duration metric: took 2.748741671s to extract the tarball
	I0425 20:03:17.662184   72304 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:17.706288   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:17.773537   72304 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:03:17.773565   72304 cache_images.go:84] Images are preloaded, skipping loading
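
Whether the preload tarball gets copied at all is decided by this image check: the output of "sudo crictl images --output json" is parsed and, if the expected kube-apiserver tag is missing (as it was at 20:03:13 above), the tarball is pushed over and unpacked into /var; the re-run at 20:03:17 then reports everything preloaded. A small sketch of that presence check, assuming crictl's JSON carries an images[].repoTags field and that it runs on the node itself:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages mirrors only the part of crictl's JSON output that the check
// needs; the shape here is an assumption, not an exhaustive schema.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the CRI runtime already knows the given tag,
// which is what decides between "all images are preloaded" and copying
// the preload tarball over.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.0")
	fmt.Println(ok, err)
}
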
	I0425 20:03:17.773575   72304 kubeadm.go:928] updating node { 192.168.39.123 8444 v1.30.0 crio true true} ...
	I0425 20:03:17.773709   72304 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142196 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:17.773799   72304 ssh_runner.go:195] Run: crio config
	I0425 20:03:17.836354   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:17.836379   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:17.836391   72304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:17.836411   72304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142196 NodeName:default-k8s-diff-port-142196 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:17.836545   72304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142196"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:17.836599   72304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:17.848441   72304 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:17.848506   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:17.860320   72304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0425 20:03:17.885528   72304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:17.905701   72304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
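
The 2172-byte file staged here is the multi-document YAML dumped above: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---", later copied over /var/tmp/minikube/kubeadm.yaml. When digging through a failed run it can help to split the staged file back into its documents; a stdlib-only sketch, with the path taken from this log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path as staged in the log; any multi-document kubeadm YAML works.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	// kubeadm configs separate documents with a bare "---" line.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: %s (%d lines)\n", i+1, kind, strings.Count(doc, "\n")+1)
	}
}
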
	I0425 20:03:17.925064   72304 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:17.930085   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:17.944507   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:18.108208   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:18.134428   72304 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196 for IP: 192.168.39.123
	I0425 20:03:18.134456   72304 certs.go:194] generating shared ca certs ...
	I0425 20:03:18.134479   72304 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:18.134672   72304 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:18.134745   72304 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:18.134761   72304 certs.go:256] generating profile certs ...
	I0425 20:03:18.134870   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/client.key
	I0425 20:03:18.245553   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key.1fb61bcb
	I0425 20:03:18.245666   72304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key
	I0425 20:03:18.245833   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:18.245880   72304 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:18.245894   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:18.245934   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:18.245964   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:18.245997   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:18.246058   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:18.246994   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:18.293000   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:18.322296   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:18.358060   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:18.390999   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0425 20:03:18.420333   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:18.450050   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:18.477983   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:18.506030   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:18.538394   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:18.574361   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:18.610827   72304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:18.634141   72304 ssh_runner.go:195] Run: openssl version
	I0425 20:03:18.640647   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:18.653988   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659400   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659458   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.665868   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:18.679247   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:18.692272   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697356   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697410   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.703694   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:18.716412   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:18.733362   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739598   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739651   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.748175   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:18.764492   72304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:18.770594   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:18.777414   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:18.784614   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:18.793453   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:18.800721   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:18.807982   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
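
The six openssl runs above each ask whether a control-plane certificate will still be valid in 86400 seconds (24 hours), which is what -checkend does. The equivalent check with Go's crypto/x509, using two of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring the `openssl x509 -checkend` probes in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Paths are two of those probed in the log; adjust for your environment.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		expiring, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, expiring, err)
	}
}
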
	I0425 20:03:18.814836   72304 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:18.814942   72304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:18.814992   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.864771   72304 cri.go:89] found id: ""
	I0425 20:03:18.864834   72304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:18.878200   72304 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:18.878238   72304 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:18.878245   72304 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:18.878305   72304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:18.892071   72304 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:18.892973   72304 kubeconfig.go:125] found "default-k8s-diff-port-142196" server: "https://192.168.39.123:8444"
	I0425 20:03:18.894860   72304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:18.907959   72304 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.123
	I0425 20:03:18.907989   72304 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:18.907998   72304 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:18.908045   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.951245   72304 cri.go:89] found id: ""
	I0425 20:03:18.951311   72304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:18.980033   72304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:18.995453   72304 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:18.995473   72304 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:18.995524   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0425 20:03:19.007409   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:19.007470   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:19.019782   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0425 20:03:19.031410   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:19.031493   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:19.043439   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.055936   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:19.055999   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.067986   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0425 20:03:19.080785   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:19.080869   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:19.092802   72304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:19.105024   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.240077   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.259510   72304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.019382485s)
	I0425 20:03:20.259544   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.489833   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.599319   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.784451   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:20.784606   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.284759   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:17.480654   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:17.481045   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:17.481094   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:17.481007   73691 retry.go:31] will retry after 1.238487953s: waiting for machine to come up
	I0425 20:03:18.720512   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:18.720940   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:18.720965   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:18.720902   73691 retry.go:31] will retry after 2.277078909s: waiting for machine to come up
	I0425 20:03:20.999749   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:21.000275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:21.000305   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:21.000223   73691 retry.go:31] will retry after 2.81059851s: waiting for machine to come up
	I0425 20:03:18.940880   72220 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:18.983894   72220 system_pods.go:61] "coredns-7db6d8ff4d-67sp6" [0fc3ee18-e3fe-4f4a-a5bd-4d6e3497bfa3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:18.983953   72220 system_pods.go:61] "etcd-no-preload-744552" [f3768d08-4cc6-42aa-9d1c-b0fd5d6ffed5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:18.983975   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [9d927e1f-4ddb-4b54-b1f1-f5248cb51745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:18.983984   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [cc71ce6c-22ba-4189-99dc-dd2da6506d37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:18.983993   72220 system_pods.go:61] "kube-proxy-whkbk" [a22b51a9-4854-41f5-bb5a-a81920a09b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0425 20:03:18.984026   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [5f01cd76-d6b7-4033-9aa9-38cac91965d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:18.984037   72220 system_pods.go:61] "metrics-server-569cc877fc-6n2gd" [03283a78-d44f-4f60-9743-680c18aeace3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:18.984052   72220 system_pods.go:61] "storage-provisioner" [4211811e-85ce-4da2-bc16-16909c26ced7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0425 20:03:18.984064   72220 system_pods.go:74] duration metric: took 392.509163ms to wait for pod list to return data ...
	I0425 20:03:18.984077   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:18.989373   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:18.989405   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:18.989424   72220 node_conditions.go:105] duration metric: took 5.341625ms to run NodePressure ...
	I0425 20:03:18.989446   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.809313   72220 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818730   72220 kubeadm.go:733] kubelet initialised
	I0425 20:03:19.818753   72220 kubeadm.go:734] duration metric: took 9.41696ms waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818761   72220 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:19.825762   72220 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:21.834658   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:21.785434   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.855046   72304 api_server.go:72] duration metric: took 1.070594042s to wait for apiserver process to appear ...
	I0425 20:03:21.855127   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:21.855156   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:21.855709   72304 api_server.go:269] stopped: https://192.168.39.123:8444/healthz: Get "https://192.168.39.123:8444/healthz": dial tcp 192.168.39.123:8444: connect: connection refused
	I0425 20:03:22.355555   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.430068   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.430099   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.430115   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.487089   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.487124   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.855301   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.861270   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:24.861299   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:25.356007   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.360802   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.360839   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:25.855336   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.861719   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.861753   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:23.812963   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:23.813457   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:23.813476   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:23.813429   73691 retry.go:31] will retry after 2.508562986s: waiting for machine to come up
	I0425 20:03:26.323267   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:26.323733   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:26.323761   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:26.323699   73691 retry.go:31] will retry after 4.475703543s: waiting for machine to come up
	I0425 20:03:26.355254   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.360977   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.361011   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:26.855547   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.860178   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.860203   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.355819   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.360466   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:27.360491   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.856219   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.861706   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:03:27.868486   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:27.868525   72304 api_server.go:131] duration metric: took 6.013385579s to wait for apiserver health ...
	I0425 20:03:27.868536   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:27.868544   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:27.870534   72304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
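	(The polling above is minikube waiting for the kube-apiserver readiness hooks to settle: the initial 403 is the anonymous probe being rejected ("system:anonymous" cannot get /healthz), and the 500s clear once the rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes and apiservice-discovery-controller post-start hooks report ok. The same per-hook breakdown can be reproduced by hand — a minimal sketch, assuming the profile's kubeconfig context is loaded and the VM at 192.168.39.123:8444 is still reachable; the raw curl form is anonymous and may get the same 403 seen above:
	kubectl --context default-k8s-diff-port-142196 get --raw='/healthz?verbose'
	curl -k 'https://192.168.39.123:8444/healthz?verbose'   # anonymous probe; may return 403
	)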
	I0425 20:03:24.335382   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:24.335415   72220 pod_ready.go:81] duration metric: took 4.509621487s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:24.335427   72220 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:26.342530   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:28.841444   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:27.871863   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:27.885767   72304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:03:27.910270   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:27.922984   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:27.923016   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:27.923024   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:27.923030   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:27.923036   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:27.923041   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:03:27.923052   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:27.923057   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:27.923061   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:03:27.923067   72304 system_pods.go:74] duration metric: took 12.774358ms to wait for pod list to return data ...
	I0425 20:03:27.923073   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:27.927553   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:27.927582   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:27.927596   72304 node_conditions.go:105] duration metric: took 4.517775ms to run NodePressure ...
	I0425 20:03:27.927616   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:28.213013   72304 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217836   72304 kubeadm.go:733] kubelet initialised
	I0425 20:03:28.217860   72304 kubeadm.go:734] duration metric: took 4.809ms waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217869   72304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:28.225122   72304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.229920   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229940   72304 pod_ready.go:81] duration metric: took 4.794976ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.229948   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229954   72304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.234362   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234380   72304 pod_ready.go:81] duration metric: took 4.417955ms for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.234388   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234394   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.238885   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238904   72304 pod_ready.go:81] duration metric: took 4.504378ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.238917   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238924   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.314420   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314446   72304 pod_ready.go:81] duration metric: took 75.511589ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.314457   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314464   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.714128   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714165   72304 pod_ready.go:81] duration metric: took 399.694231ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.714178   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714187   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.113925   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113958   72304 pod_ready.go:81] duration metric: took 399.760651ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.113971   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113977   72304 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.514107   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514132   72304 pod_ready.go:81] duration metric: took 400.147308ms for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.514142   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514149   72304 pod_ready.go:38] duration metric: took 1.296270699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:29.514167   72304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:03:29.528766   72304 ops.go:34] apiserver oom_adj: -16
	I0425 20:03:29.528791   72304 kubeadm.go:591] duration metric: took 10.650540723s to restartPrimaryControlPlane
	I0425 20:03:29.528801   72304 kubeadm.go:393] duration metric: took 10.713975851s to StartCluster
	I0425 20:03:29.528816   72304 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.528887   72304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:29.530674   72304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.530951   72304 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:03:29.532792   72304 out.go:177] * Verifying Kubernetes components...
	I0425 20:03:29.531039   72304 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:03:29.531203   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:29.534328   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:29.534349   72304 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534377   72304 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534383   72304 addons.go:243] addon metrics-server should already be in state true
	I0425 20:03:29.534331   72304 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534416   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534441   72304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142196"
	I0425 20:03:29.534334   72304 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534536   72304 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534549   72304 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:03:29.534584   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534786   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534814   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534839   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534815   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534956   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.535000   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.551165   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0425 20:03:29.551680   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552007   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0425 20:03:29.552399   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.552419   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.552445   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552864   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553003   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.553028   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.553066   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.553409   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553621   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0425 20:03:29.554006   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.554024   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.554057   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.554555   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.554579   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.554908   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.555432   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.555487   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.557216   72304 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.557238   72304 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:03:29.557267   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.557642   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.557675   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.570559   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0425 20:03:29.571013   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.571538   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.571562   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.571944   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.572152   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.574003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.576061   72304 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:03:29.575108   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33777
	I0425 20:03:29.575580   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0425 20:03:29.577356   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:03:29.577374   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:03:29.577394   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.577861   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.577964   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.578333   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578356   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578514   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578543   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578735   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578909   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578947   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.579603   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.579633   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.580871   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.582436   72304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:29.581297   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.581851   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.583941   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.583971   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.583994   72304 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.584021   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:03:29.584031   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.584044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.584282   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.584430   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.586538   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.586880   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.586901   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.587119   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.587314   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.587470   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.587560   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.595882   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0425 20:03:29.596234   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.596711   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.596728   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.597146   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.597321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.598599   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.598799   72304 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:29.598811   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:03:29.598822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.600829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.601149   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.601409   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.601479   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.601537   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.772228   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:29.799159   72304 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:29.893622   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:03:29.893647   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:03:29.895090   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.919651   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:03:29.919673   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:03:29.929992   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:30.004488   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:30.004519   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:03:30.061525   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.113425632s)
	I0425 20:03:31.043511   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.148338843s)
	I0425 20:03:31.043539   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043587   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043524   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043629   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043894   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043910   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043946   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.043953   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043964   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043973   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043992   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044107   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044159   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044199   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044209   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044219   72304 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-142196"
	I0425 20:03:31.044216   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044237   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044253   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044262   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044542   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044566   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044662   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044682   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.052429   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.052451   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.052675   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.052694   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.055680   72304 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0425 20:03:31.057271   72304 addons.go:505] duration metric: took 1.526243989s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0425 20:03:32.187768   71966 start.go:364] duration metric: took 56.585448027s to acquireMachinesLock for "embed-certs-512173"
	I0425 20:03:32.187838   71966 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:32.187849   71966 fix.go:54] fixHost starting: 
	I0425 20:03:32.188220   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:32.188266   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:32.207172   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0425 20:03:32.207627   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:32.208170   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:03:32.208196   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:32.208493   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:32.208700   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:32.208837   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:03:32.210552   71966 fix.go:112] recreateIfNeeded on embed-certs-512173: state=Stopped err=<nil>
	I0425 20:03:32.210577   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	W0425 20:03:32.210741   71966 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:32.213400   71966 out.go:177] * Restarting existing kvm2 VM for "embed-certs-512173" ...
	I0425 20:03:30.803467   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804014   72712 main.go:141] libmachine: (old-k8s-version-210442) Found IP for machine: 192.168.61.136
	I0425 20:03:30.804041   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserving static IP address...
	I0425 20:03:30.804057   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has current primary IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804495   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.804535   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | skip adding static IP to network mk-old-k8s-version-210442 - found existing host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"}
	I0425 20:03:30.804562   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserved static IP address: 192.168.61.136
	I0425 20:03:30.804582   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting for SSH to be available...
	I0425 20:03:30.804599   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Getting to WaitForSSH function...
	I0425 20:03:30.807110   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807533   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.807556   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807706   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH client type: external
	I0425 20:03:30.807725   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa (-rw-------)
	I0425 20:03:30.807767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:30.807783   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | About to run SSH command:
	I0425 20:03:30.807815   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | exit 0
	I0425 20:03:30.935091   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:30.935445   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetConfigRaw
	I0425 20:03:30.936168   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:30.938767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939193   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.939246   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939428   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 20:03:30.939630   72712 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:30.939649   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:30.939870   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:30.942320   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.942771   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942923   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:30.943113   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943306   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943468   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:30.943640   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:30.943842   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:30.943854   72712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:31.052598   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:31.052625   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.052821   72712 buildroot.go:166] provisioning hostname "old-k8s-version-210442"
	I0425 20:03:31.052844   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.053080   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.056324   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056713   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.056745   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056885   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.057056   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057190   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057375   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.057549   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.057724   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.057742   72712 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210442 && echo "old-k8s-version-210442" | sudo tee /etc/hostname
	I0425 20:03:31.188461   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210442
	
	I0425 20:03:31.188494   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.191628   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192088   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.192117   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192332   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.192519   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192655   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192767   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.192944   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.193142   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.193167   72712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:31.317374   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:31.317402   72712 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:31.317436   72712 buildroot.go:174] setting up certificates
	I0425 20:03:31.317447   72712 provision.go:84] configureAuth start
	I0425 20:03:31.317461   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.317778   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:31.321012   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321388   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.321421   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321698   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.323976   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324326   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.324354   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324523   72712 provision.go:143] copyHostCerts
	I0425 20:03:31.324573   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:31.324584   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:31.324656   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:31.324764   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:31.324778   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:31.324807   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:31.324879   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:31.324890   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:31.324915   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:31.324978   72712 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210442 san=[127.0.0.1 192.168.61.136 localhost minikube old-k8s-version-210442]
	I0425 20:03:31.410674   72712 provision.go:177] copyRemoteCerts
	I0425 20:03:31.410728   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:31.410755   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.413170   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413449   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.413491   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413634   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.413832   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.413988   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.414156   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:31.502759   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:31.536662   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0425 20:03:31.565106   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:31.593254   72712 provision.go:87] duration metric: took 275.793443ms to configureAuth
	I0425 20:03:31.593287   72712 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:31.593621   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 20:03:31.593720   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.596515   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.596827   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.596859   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.597057   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.597287   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597448   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597624   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.597775   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.597927   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.597942   72712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:31.925149   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:31.925182   72712 machine.go:97] duration metric: took 985.540626ms to provisionDockerMachine
	I0425 20:03:31.925199   72712 start.go:293] postStartSetup for "old-k8s-version-210442" (driver="kvm2")
	I0425 20:03:31.925211   72712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:31.925258   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:31.925560   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:31.925596   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.928532   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.928982   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.929013   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.929232   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.929458   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.929637   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.929787   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.023009   72712 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:32.029391   72712 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:32.029426   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:32.029508   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:32.029576   72712 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:32.029664   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:32.046596   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:32.077323   72712 start.go:296] duration metric: took 152.112632ms for postStartSetup
	I0425 20:03:32.077396   72712 fix.go:56] duration metric: took 20.821829703s for fixHost
	I0425 20:03:32.077425   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.080136   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080477   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.080526   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080636   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.080836   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081067   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081283   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.081493   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:32.081695   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:32.081711   72712 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:03:32.187617   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075412.163072845
	
	I0425 20:03:32.187642   72712 fix.go:216] guest clock: 1714075412.163072845
	I0425 20:03:32.187652   72712 fix.go:229] Guest: 2024-04-25 20:03:32.163072845 +0000 UTC Remote: 2024-04-25 20:03:32.07740605 +0000 UTC m=+254.767943919 (delta=85.666795ms)
	I0425 20:03:32.187675   72712 fix.go:200] guest clock delta is within tolerance: 85.666795ms
	I0425 20:03:32.187682   72712 start.go:83] releasing machines lock for "old-k8s-version-210442", held for 20.932154384s
	I0425 20:03:32.187709   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.187998   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:32.190538   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.190898   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.190932   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.191077   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191817   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191996   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.192076   72712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:32.192116   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.192208   72712 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:32.192230   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.194821   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.194988   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195191   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195212   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195334   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195368   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195500   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195673   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195677   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195847   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195866   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196063   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.196083   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196219   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.276462   72712 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:32.300979   72712 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
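Note: the provisioning trace above for old-k8s-version-210442 — setting the hostname, copying the CA/server certificates, writing /etc/sysconfig/crio.minikube — is carried out entirely as shell commands executed on the guest over SSH (the "ssh_runner.go:195] Run: ..." lines). As a rough illustration of that pattern only, here is a minimal Go sketch using golang.org/x/crypto/ssh; it is not minikube's ssh_runner implementation, and the address, user, key path, and runRemote helper are placeholders taken from the lease and key shown in the log.

// sketch: run one command on a remote host with key-based auth (illustrative only)
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Placeholder values; in the log they come from the DHCP lease and the machine's id_rsa path.
	out, err := runRemote("192.168.61.136:22", "docker", "/path/to/id_rsa", "hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}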
	I0425 20:03:30.842282   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:32.843750   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.843779   72220 pod_ready.go:81] duration metric: took 8.508343704s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.843791   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850293   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.850316   72220 pod_ready.go:81] duration metric: took 6.517764ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850327   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855621   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.855657   72220 pod_ready.go:81] duration metric: took 5.31225ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855671   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860450   72220 pod_ready.go:92] pod "kube-proxy-whkbk" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.860483   72220 pod_ready.go:81] duration metric: took 4.797706ms for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860505   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865268   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.865286   72220 pod_ready.go:81] duration metric: took 4.774354ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865294   72220 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
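Note: the pod_ready.go lines above poll the cluster until each system pod reports a Ready condition (etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, then metrics-server). The following is a minimal client-go sketch of such a poll, illustrative only: the waitPodReady helper, the fixed 2-second interval, and the kubeconfig path are assumptions, not minikube's actual code.

// sketch: poll a pod until its Ready condition is True (illustrative only)
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // simple fixed-interval poll
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// The kubeconfig path is a placeholder; the tests use the profile's generated kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "etcd-no-preload-744552", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}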
	I0425 20:03:32.458446   72712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:32.465434   72712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:32.465518   72712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:32.486929   72712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:32.486954   72712 start.go:494] detecting cgroup driver to use...
	I0425 20:03:32.487019   72712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:32.509425   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:32.530999   72712 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:32.531059   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:32.547280   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:32.563594   72712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:32.699207   72712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:32.875013   72712 docker.go:233] disabling docker service ...
	I0425 20:03:32.875096   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:32.897149   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:32.916105   72712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:33.071143   72712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:33.231529   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:33.252919   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:33.277388   72712 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0425 20:03:33.277457   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.290889   72712 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:33.290953   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.305488   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.319263   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.332961   72712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:33.354086   72712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:33.373431   72712 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:33.373517   72712 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:33.398458   72712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:33.418683   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:33.595555   72712 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:33.808015   72712 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:33.810391   72712 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:33.817593   72712 start.go:562] Will wait 60s for crictl version
	I0425 20:03:33.817646   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:33.823381   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:33.866310   72712 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:33.866411   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.905561   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.952764   72712 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0425 20:03:32.214679   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Start
	I0425 20:03:32.214880   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring networks are active...
	I0425 20:03:32.215746   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network default is active
	I0425 20:03:32.216106   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network mk-embed-certs-512173 is active
	I0425 20:03:32.216566   71966 main.go:141] libmachine: (embed-certs-512173) Getting domain xml...
	I0425 20:03:32.217397   71966 main.go:141] libmachine: (embed-certs-512173) Creating domain...
	I0425 20:03:33.554665   71966 main.go:141] libmachine: (embed-certs-512173) Waiting to get IP...
	I0425 20:03:33.555670   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.556123   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.556186   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.556089   73884 retry.go:31] will retry after 278.996701ms: waiting for machine to come up
	I0425 20:03:33.836750   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.837273   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.837301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.837244   73884 retry.go:31] will retry after 324.410317ms: waiting for machine to come up
	I0425 20:03:34.163017   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.163490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.163518   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.163457   73884 retry.go:31] will retry after 403.985826ms: waiting for machine to come up
	I0425 20:03:34.568824   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.569364   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.569397   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.569330   73884 retry.go:31] will retry after 427.12179ms: waiting for machine to come up
	I0425 20:03:34.998092   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.998684   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.998709   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.998646   73884 retry.go:31] will retry after 710.71475ms: waiting for machine to come up
	I0425 20:03:35.710643   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:35.711707   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:35.711736   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:35.711616   73884 retry.go:31] will retry after 806.283051ms: waiting for machine to come up
	I0425 20:03:31.803034   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:33.813548   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:35.304283   72304 node_ready.go:49] node "default-k8s-diff-port-142196" has status "Ready":"True"
	I0425 20:03:35.304311   72304 node_ready.go:38] duration metric: took 5.505123781s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:35.304323   72304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:35.311480   72304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320910   72304 pod_ready.go:92] pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:35.320938   72304 pod_ready.go:81] duration metric: took 9.425507ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320953   72304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:33.954161   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:33.957316   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.957778   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:33.957811   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.958080   72712 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:33.964467   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:33.984277   72712 kubeadm.go:877] updating cluster {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:33.984437   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 20:03:33.984499   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:34.049402   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:34.049479   72712 ssh_runner.go:195] Run: which lz4
	I0425 20:03:34.055519   72712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 20:03:34.061481   72712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:34.061522   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0425 20:03:36.271646   72712 crio.go:462] duration metric: took 2.216165414s to copy over tarball
	I0425 20:03:36.271722   72712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:03:34.877483   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:37.373822   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:36.519514   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:36.520052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:36.520085   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:36.519968   73884 retry.go:31] will retry after 990.986618ms: waiting for machine to come up
	I0425 20:03:37.513151   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:37.513636   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:37.513669   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:37.513574   73884 retry.go:31] will retry after 1.371471682s: waiting for machine to come up
	I0425 20:03:38.886926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:38.887491   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:38.887527   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:38.887415   73884 retry.go:31] will retry after 1.633505345s: waiting for machine to come up
	I0425 20:03:40.523438   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:40.523975   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:40.524004   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:40.523926   73884 retry.go:31] will retry after 2.280577933s: waiting for machine to come up
	I0425 20:03:37.330040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.350040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.894331   72712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.622580176s)
	I0425 20:03:39.894364   72712 crio.go:469] duration metric: took 3.62268463s to extract the tarball
	I0425 20:03:39.894373   72712 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:39.965071   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:40.009534   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:40.009561   72712 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:03:40.009629   72712 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.009651   72712 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.009677   72712 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.009662   72712 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.009794   72712 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.009920   72712 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.010033   72712 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.010241   72712 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0425 20:03:40.011305   72712 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.011334   72712 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.011346   72712 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.011686   72712 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.012422   72712 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.012429   72712 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.012437   72712 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0425 20:03:40.012546   72712 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.143545   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.155203   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.157842   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.158081   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.161210   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.166515   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.181859   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0425 20:03:40.301699   72712 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0425 20:03:40.301759   72712 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.301805   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.379386   72712 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0425 20:03:40.379445   72712 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.379490   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406119   72712 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0425 20:03:40.406231   72712 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.406174   72712 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0425 20:03:40.406338   72712 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.406365   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406389   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420450   72712 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0425 20:03:40.420495   72712 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.420548   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420461   72712 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0425 20:03:40.420629   72712 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.420677   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430055   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.430110   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.430232   72712 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0425 20:03:40.430263   72712 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0425 20:03:40.430274   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.430277   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.430303   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430326   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.430389   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.582980   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0425 20:03:40.583094   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0425 20:03:40.587500   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0425 20:03:40.587564   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0425 20:03:40.587579   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0425 20:03:40.587650   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0425 20:03:40.587697   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0425 20:03:40.625942   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0425 20:03:40.941957   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:41.096086   72712 cache_images.go:92] duration metric: took 1.086507707s to LoadCachedImages
	W0425 20:03:41.096249   72712 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0425 20:03:41.096279   72712 kubeadm.go:928] updating node { 192.168.61.136 8443 v1.20.0 crio true true} ...
	I0425 20:03:41.096415   72712 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210442 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:41.096509   72712 ssh_runner.go:195] Run: crio config
	I0425 20:03:41.169311   72712 cni.go:84] Creating CNI manager for ""
	I0425 20:03:41.169341   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:41.169357   72712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:41.169397   72712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210442 NodeName:old-k8s-version-210442 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0425 20:03:41.169570   72712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210442"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:41.169639   72712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0425 20:03:41.182191   72712 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:41.182283   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:41.193546   72712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0425 20:03:41.218220   72712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:41.238647   72712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
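Note: the kubeadm config rendered above is first staged as /var/tmp/minikube/kubeadm.yaml.new (the 2123-byte scp just logged); it is only diffed against any existing /var/tmp/minikube/kubeadm.yaml at 20:03:42.192402 and copied into place at 20:03:42.398271. A minimal sketch of that stage-diff-copy pattern, simplified from minikube's actual control flow:

    # stage the freshly rendered config, compare it with what is on disk,
    # and promote it only when they differ (paths as used in this log)
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      || sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml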
	I0425 20:03:41.259040   72712 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:41.263603   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:41.278007   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:41.425587   72712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:41.450990   72712 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442 for IP: 192.168.61.136
	I0425 20:03:41.451013   72712 certs.go:194] generating shared ca certs ...
	I0425 20:03:41.451034   72712 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:41.451225   72712 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:41.451307   72712 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:41.451323   72712 certs.go:256] generating profile certs ...
	I0425 20:03:41.451449   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.key
	I0425 20:03:41.451528   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key.1533c9ac
	I0425 20:03:41.451587   72712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key
	I0425 20:03:41.451789   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:41.451860   72712 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:41.451880   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:41.451915   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:41.451945   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:41.451968   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:41.452023   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:41.452870   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:41.510467   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:41.555595   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:41.606059   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:41.648206   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0425 20:03:41.690090   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:41.727674   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:41.766537   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:41.799524   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:41.828668   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:41.860964   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:41.890272   72712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:41.911787   72712 ssh_runner.go:195] Run: openssl version
	I0425 20:03:41.918926   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:41.933194   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.938995   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.939060   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.945934   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:41.959859   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:41.974906   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.980931   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.981006   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.987789   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:42.002455   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:42.016797   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023789   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023853   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.033189   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
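The hashes in the symlink names above (3ec20f2e, b5213941, 51391683) are OpenSSL subject-name hashes: openssl x509 -hash prints the value that the /etc/ssl/certs lookup scheme expects as the link name. A short sketch of the same convention, using one of the certificates from this run:

    # print the subject hash that names the trust-store symlink,
    # then link the certificate under /etc/ssl/certs/<hash>.0
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"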
	I0425 20:03:42.047467   72712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:42.053552   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:42.063130   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:42.070290   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:42.079527   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:42.087983   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:42.096658   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
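Each -checkend 86400 probe above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: openssl exits 0 if it will not have expired by then and 1 if it will, which is what decides whether the cert needs regenerating. For example:

    # exit code 0: still valid in 24h; 1: expires within 24h
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid tomorrow" || echo "expires within 24h"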
	I0425 20:03:42.103477   72712 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:42.103596   72712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:42.103649   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.155980   72712 cri.go:89] found id: ""
	I0425 20:03:42.156085   72712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:42.172499   72712 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:42.172525   72712 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:42.172532   72712 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:42.172580   72712 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:42.187864   72712 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:42.188948   72712 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210442" does not appear in /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:42.189659   72712 kubeconfig.go:62] /home/jenkins/minikube-integration/18757-6355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210442" cluster setting kubeconfig missing "old-k8s-version-210442" context setting]
	I0425 20:03:42.190635   72712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:42.192402   72712 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:42.207284   72712 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.136
	I0425 20:03:42.207318   72712 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:42.207329   72712 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:42.207403   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.251184   72712 cri.go:89] found id: ""
	I0425 20:03:42.251257   72712 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:42.271727   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:42.289161   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:42.289184   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:42.289237   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:42.302492   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:42.302588   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:42.317790   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:42.329940   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:42.330002   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:42.342772   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:39.375028   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:41.871821   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.805640   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:42.806121   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:42.806148   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:42.806072   73884 retry.go:31] will retry after 2.588054599s: waiting for machine to come up
	I0425 20:03:45.395282   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:45.395712   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:45.395759   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:45.395662   73884 retry.go:31] will retry after 3.473643777s: waiting for machine to come up
	I0425 20:03:41.329479   72304 pod_ready.go:92] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.329511   72304 pod_ready.go:81] duration metric: took 6.008549199s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.329523   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335660   72304 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.335688   72304 pod_ready.go:81] duration metric: took 6.15557ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335700   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341409   72304 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.341433   72304 pod_ready.go:81] duration metric: took 5.723469ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341446   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347145   72304 pod_ready.go:92] pod "kube-proxy-bqmtp" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.347167   72304 pod_ready.go:81] duration metric: took 5.713095ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347179   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376913   72304 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.376939   72304 pod_ready.go:81] duration metric: took 29.751827ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376951   72304 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:43.383378   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:45.884869   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.356480   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:42.357280   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:42.370403   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:42.384245   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:42.384332   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:42.398271   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:42.412361   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:42.575076   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.186458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.480114   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.594128   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
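Instead of a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same rendered config, regenerating certificates, kubeconfigs and static-pod manifests without re-bootstrapping the cluster. The five commands above reduce to the following loop (same phases, binaries path and config file as in the log):

    # replay selected kubeadm init phases against the staged config
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done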
	I0425 20:03:43.707129   72712 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:43.707221   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.207406   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.707733   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.208100   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.708041   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.207966   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.707255   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:47.207754   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:43.873747   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:46.374439   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:48.871928   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:48.872457   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:48.872490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:48.872393   73884 retry.go:31] will retry after 4.148424216s: waiting for machine to come up
	I0425 20:03:48.384599   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.883246   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:47.707730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.208213   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.707685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.207879   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.707914   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.208278   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.707691   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.207600   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.707365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:52.207931   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
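The burst of pgrep calls above is a plain poll: roughly twice per second the runner asks the guest for a kube-apiserver process whose command line matches the minikube manifests, and retries until one appears or the wait budget runs out. A rough shell equivalent (the 60-second budget here is an illustrative assumption, not a value taken from the log):

    # poll for the apiserver process the same way the log does (assumed ~60s budget)
    for i in $(seq 1 120); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 0.5
    done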
	I0425 20:03:48.872282   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.872356   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.874452   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:53.022813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023343   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has current primary IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023367   71966 main.go:141] libmachine: (embed-certs-512173) Found IP for machine: 192.168.50.7
	I0425 20:03:53.023381   71966 main.go:141] libmachine: (embed-certs-512173) Reserving static IP address...
	I0425 20:03:53.023750   71966 main.go:141] libmachine: (embed-certs-512173) Reserved static IP address: 192.168.50.7
	I0425 20:03:53.023770   71966 main.go:141] libmachine: (embed-certs-512173) Waiting for SSH to be available...
	I0425 20:03:53.023791   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.023827   71966 main.go:141] libmachine: (embed-certs-512173) DBG | skip adding static IP to network mk-embed-certs-512173 - found existing host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"}
	I0425 20:03:53.023848   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Getting to WaitForSSH function...
	I0425 20:03:53.025753   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.026132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026244   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH client type: external
	I0425 20:03:53.026268   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa (-rw-------)
	I0425 20:03:53.026301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:53.026313   71966 main.go:141] libmachine: (embed-certs-512173) DBG | About to run SSH command:
	I0425 20:03:53.026325   71966 main.go:141] libmachine: (embed-certs-512173) DBG | exit 0
	I0425 20:03:53.158487   71966 main.go:141] libmachine: (embed-certs-512173) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:53.158846   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetConfigRaw
	I0425 20:03:53.159567   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.161881   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162200   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.162257   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162492   71966 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/config.json ...
	I0425 20:03:53.162658   71966 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:53.162675   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:53.162875   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.164797   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.165140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165256   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.165402   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165561   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165659   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.165815   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.165989   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.166002   71966 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:53.283185   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:53.283219   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283455   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:03:53.283480   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283690   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.286427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.286843   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286969   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.287164   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287350   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.287641   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.287881   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.287904   71966 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-512173 && echo "embed-certs-512173" | sudo tee /etc/hostname
	I0425 20:03:53.423037   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-512173
	
	I0425 20:03:53.423067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.425749   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.426140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426329   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.426501   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426640   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426747   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.426866   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.427015   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.427083   71966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-512173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-512173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-512173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:53.553687   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
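The shell snippet above follows the usual Debian/buildroot convention of mapping the machine's own hostname to 127.0.1.1, so the new name resolves locally even without DNS. After it runs, /etc/hosts on the guest should contain a line equivalent to:

    127.0.1.1 embed-certs-512173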
	I0425 20:03:53.553715   71966 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:53.553749   71966 buildroot.go:174] setting up certificates
	I0425 20:03:53.553758   71966 provision.go:84] configureAuth start
	I0425 20:03:53.553775   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.554053   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.556655   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.556995   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.557034   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.557121   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.559341   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559692   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.559718   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559897   71966 provision.go:143] copyHostCerts
	I0425 20:03:53.559970   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:53.559984   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:53.560049   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:53.560129   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:53.560136   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:53.560155   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:53.560203   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:53.560214   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:53.560233   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:53.560278   71966 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-512173 san=[127.0.0.1 192.168.50.7 embed-certs-512173 localhost minikube]
	I0425 20:03:53.621714   71966 provision.go:177] copyRemoteCerts
	I0425 20:03:53.621777   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:53.621804   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.624556   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.624883   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.624914   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.625128   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.625324   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.625458   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.625602   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:53.715477   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:03:53.743782   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:53.771468   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:53.798701   71966 provision.go:87] duration metric: took 244.92871ms to configureAuth
	I0425 20:03:53.798726   71966 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:53.798922   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:53.798991   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.801607   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.801946   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.801972   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.802187   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.802373   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802628   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.802833   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.802986   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.803000   71966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:54.117164   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:54.117193   71966 machine.go:97] duration metric: took 954.522384ms to provisionDockerMachine
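The provisioning step just completed drops the --insecure-registry flag into /etc/sysconfig/crio.minikube and restarts crio; presumably the crio systemd unit on the buildroot image sources that file for extra command-line options (an assumption about the image, not something shown in this log). Based on the command and its echoed output, the file should now read:

    # /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '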
	I0425 20:03:54.117207   71966 start.go:293] postStartSetup for "embed-certs-512173" (driver="kvm2")
	I0425 20:03:54.117219   71966 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:54.117238   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.117558   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:54.117591   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.120060   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.120454   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120575   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.120761   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.120891   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.121002   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.209919   71966 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:54.215633   71966 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:54.215663   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:54.215747   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:54.215860   71966 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:54.215996   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:54.227250   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:54.257169   71966 start.go:296] duration metric: took 139.949813ms for postStartSetup
	I0425 20:03:54.257212   71966 fix.go:56] duration metric: took 22.069363419s for fixHost
	I0425 20:03:54.257237   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.260255   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260588   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.260613   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260731   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.260928   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261099   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261266   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.261447   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:54.261644   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:54.261655   71966 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 20:03:54.376222   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075434.352338373
	
	I0425 20:03:54.376245   71966 fix.go:216] guest clock: 1714075434.352338373
	I0425 20:03:54.376255   71966 fix.go:229] Guest: 2024-04-25 20:03:54.352338373 +0000 UTC Remote: 2024-04-25 20:03:54.257217658 +0000 UTC m=+368.446046405 (delta=95.120715ms)
	I0425 20:03:54.376287   71966 fix.go:200] guest clock delta is within tolerance: 95.120715ms
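(The delta is just the difference of the two timestamps: 54.352338373 s − 54.257217658 s = 0.095120715 s ≈ 95.12 ms, comfortably inside the tolerance.)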
	I0425 20:03:54.376295   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 22.188484297s
	I0425 20:03:54.376317   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.376600   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:54.379217   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379646   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.379678   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379869   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380436   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380633   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380729   71966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:54.380779   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.380857   71966 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:54.380880   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.383698   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384081   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384283   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384471   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.384610   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.384647   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384683   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384781   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.384821   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384982   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.385131   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.385330   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.468506   71966 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:54.493995   71966 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:54.642719   71966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:54.649565   71966 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:54.649632   71966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:54.667526   71966 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:54.667546   71966 start.go:494] detecting cgroup driver to use...
	I0425 20:03:54.667596   71966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:54.685384   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:54.701852   71966 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:54.701905   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:54.718559   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:54.734874   71966 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:54.858325   71966 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:55.045158   71966 docker.go:233] disabling docker service ...
	I0425 20:03:55.045219   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:55.061668   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:55.076486   71966 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:55.207287   71966 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:55.352537   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:55.369470   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:55.392638   71966 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:55.392718   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.404590   71966 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:55.404655   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.416129   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.427176   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.438632   71966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:55.450725   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.462912   71966 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.485340   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.498134   71966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:55.508378   71966 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:55.508451   71966 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:55.523073   71966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:55.533901   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:55.666845   71966 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:55.828131   71966 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:55.828199   71966 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:55.833768   71966 start.go:562] Will wait 60s for crictl version
	I0425 20:03:55.833824   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:03:55.838000   71966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:55.881652   71966 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:55.881753   71966 ssh_runner.go:195] Run: crio --version
	I0425 20:03:55.917675   71966 ssh_runner.go:195] Run: crio --version
	I0425 20:03:55.953046   71966 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:03:52.884447   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:54.884538   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.707459   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.208241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.707431   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.207538   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.707289   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.207319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.707625   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.207562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.708324   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:57.207348   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.373713   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.374476   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:55.954484   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:55.957214   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957611   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:55.957638   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957832   71966 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:55.962420   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:55.976512   71966 kubeadm.go:877] updating cluster {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:55.976626   71966 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:55.976694   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:56.019881   71966 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:56.019942   71966 ssh_runner.go:195] Run: which lz4
	I0425 20:03:56.024524   71966 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 20:03:56.029297   71966 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:56.029339   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:57.736602   71966 crio.go:462] duration metric: took 1.712117844s to copy over tarball
	I0425 20:03:57.736666   71966 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:04:00.331696   71966 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.594977915s)
	I0425 20:04:00.331739   71966 crio.go:469] duration metric: took 2.595109768s to extract the tarball
	I0425 20:04:00.331751   71966 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:04:00.375437   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:04:00.430963   71966 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:04:00.430987   71966 cache_images.go:84] Images are preloaded, skipping loading
	I0425 20:04:00.430994   71966 kubeadm.go:928] updating node { 192.168.50.7 8443 v1.30.0 crio true true} ...
	I0425 20:04:00.431081   71966 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-512173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:04:00.431154   71966 ssh_runner.go:195] Run: crio config
	I0425 20:04:00.487082   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:00.487106   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:00.487117   71966 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:04:00.487135   71966 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-512173 NodeName:embed-certs-512173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:04:00.487306   71966 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-512173"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:04:00.487378   71966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:04:00.498819   71966 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:04:00.498881   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:04:00.509212   71966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0425 20:04:00.527703   71966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:04:00.546867   71966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0425 20:04:00.566302   71966 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0425 20:04:00.570629   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:04:00.584123   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:00.717589   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:00.743108   71966 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173 for IP: 192.168.50.7
	I0425 20:04:00.743173   71966 certs.go:194] generating shared ca certs ...
	I0425 20:04:00.743201   71966 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:00.743397   71966 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:04:00.743462   71966 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:04:00.743480   71966 certs.go:256] generating profile certs ...
	I0425 20:04:00.743644   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/client.key
	I0425 20:04:00.743729   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key.4a0c231f
	I0425 20:04:00.743789   71966 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key
	I0425 20:04:00.743964   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:04:00.744019   71966 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:04:00.744033   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:04:00.744064   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:04:00.744093   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:04:00.744117   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:04:00.744158   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:04:00.745130   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:04:00.797856   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:04:00.848631   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:56.885355   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:58.885857   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.707868   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.208319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.207410   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.707562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.208006   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.708245   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.208178   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.707239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:02.207926   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.873851   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.372919   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:00.877499   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:04:01.210716   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0425 20:04:01.239562   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:04:01.267356   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:04:01.295649   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:04:01.323739   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:04:01.350440   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:04:01.379693   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:04:01.409347   71966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:04:01.429857   71966 ssh_runner.go:195] Run: openssl version
	I0425 20:04:01.437636   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:04:01.449656   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455022   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455074   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.461442   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:04:01.473323   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:04:01.485988   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491661   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491719   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.498567   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:04:01.510983   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:04:01.523098   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528619   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528667   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.535129   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:04:01.546668   71966 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:04:01.552076   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:04:01.558928   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:04:01.566406   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:04:01.574761   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:04:01.581250   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:04:01.588506   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 20:04:01.594844   71966 kubeadm.go:391] StartCluster: {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:04:01.594917   71966 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:04:01.594978   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.648050   71966 cri.go:89] found id: ""
	I0425 20:04:01.648155   71966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:04:01.664291   71966 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:04:01.664318   71966 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:04:01.664325   71966 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:04:01.664387   71966 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:04:01.678686   71966 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:04:01.680096   71966 kubeconfig.go:125] found "embed-certs-512173" server: "https://192.168.50.7:8443"
	I0425 20:04:01.682375   71966 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:04:01.699073   71966 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0425 20:04:01.699109   71966 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:04:01.699122   71966 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:04:01.699190   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.744556   71966 cri.go:89] found id: ""
	I0425 20:04:01.744633   71966 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:04:01.767121   71966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:04:01.778499   71966 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:04:01.778517   71966 kubeadm.go:156] found existing configuration files:
	
	I0425 20:04:01.778575   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:04:01.789171   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:04:01.789242   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:04:01.800000   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:04:01.811015   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:04:01.811078   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:04:01.821752   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.832900   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:04:01.832962   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.844058   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:04:01.854774   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:04:01.854824   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:04:01.866086   71966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:04:01.879229   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.180778   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.971467   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.202841   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.286951   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.412260   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:04:03.412375   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.913176   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.413418   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.443763   71966 api_server.go:72] duration metric: took 1.031501246s to wait for apiserver process to appear ...
	I0425 20:04:04.443796   71966 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:04:04.443816   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:04.444334   71966 api_server.go:269] stopped: https://192.168.50.7:8443/healthz: Get "https://192.168.50.7:8443/healthz": dial tcp 192.168.50.7:8443: connect: connection refused
	I0425 20:04:04.943937   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:01.384590   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:03.885859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.707796   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.207913   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.708267   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.207491   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.707894   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.207346   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.707801   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.208283   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.707342   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:07.208190   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.381611   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:06.875270   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.463721   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.463767   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.463785   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.479254   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.479283   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.944812   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.949683   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:07.949710   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.444237   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.451663   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.451706   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.944231   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.949165   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.949194   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.444776   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.449703   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.449732   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.943865   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.948474   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.948509   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.444040   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.448740   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:10.448781   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.944487   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.950181   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:04:10.957455   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:04:10.957479   71966 api_server.go:131] duration metric: took 6.513676295s to wait for apiserver health ...
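
The /healthz probes above keep returning 500 while the apiservice-discovery-controller post-start hook is still failing, then flip to 200 roughly 6.5s in, at which point the wait ends. A minimal sketch of that kind of poll loop, assuming an *http.Client that already trusts the apiserver certificate (the URL and interval are placeholders, not minikube's actual configuration):

    package healthzwait

    import (
        "context"
        "io"
        "log"
        "net/http"
        "time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it returns
    // 200 OK or the context is cancelled. Non-200 bodies list each failing
    // post-start hook, as in the 500 responses logged above.
    func waitForHealthz(ctx context.Context, client *http.Client, url string, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                log.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }
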
	I0425 20:04:10.957487   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:10.957496   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:10.959196   71966 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:04:06.384595   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:08.883972   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.707466   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.207370   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.707951   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.207604   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.708057   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.207422   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.707391   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.207510   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.707828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:12.207519   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.960795   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:04:10.977005   71966 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
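
The bridge CNI step above only records that a 496-byte 1-k8s.conflist was copied into /etc/cni/net.d; the file's contents are not shown in the log. For orientation, a generic bridge conflist written the same way might look like the following sketch — the subnet and plugin options are illustrative assumptions, not minikube's exact file:

    package cniconf

    import "os"

    // bridgeConflist is a generic CNI bridge configuration; the values here
    // are placeholders, not the actual 1-k8s.conflist contents.
    const bridgeConflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    // writeConflist drops the configuration where the CRI runtime looks for it.
    func writeConflist() error {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            return err
        }
        return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }
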
	I0425 20:04:11.001393   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:04:11.021408   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:04:11.021439   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:04:11.021453   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:04:11.021466   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:04:11.021478   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:04:11.021495   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:04:11.021502   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:04:11.021513   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:04:11.021521   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:04:11.021533   71966 system_pods.go:74] duration metric: took 20.120592ms to wait for pod list to return data ...
	I0425 20:04:11.021540   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:04:11.025328   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:04:11.025360   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:04:11.025374   71966 node_conditions.go:105] duration metric: took 3.826846ms to run NodePressure ...
	I0425 20:04:11.025394   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:11.304673   71966 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309061   71966 kubeadm.go:733] kubelet initialised
	I0425 20:04:11.309082   71966 kubeadm.go:734] duration metric: took 4.385794ms waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309089   71966 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:11.314583   71966 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.319490   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319515   71966 pod_ready.go:81] duration metric: took 4.900118ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.319524   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319534   71966 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.324084   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324101   71966 pod_ready.go:81] duration metric: took 4.557199ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.324108   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324113   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.328151   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328167   71966 pod_ready.go:81] duration metric: took 4.047894ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.328174   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328184   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.404944   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.404982   71966 pod_ready.go:81] duration metric: took 76.789573ms for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.404997   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.405006   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.805191   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805221   71966 pod_ready.go:81] duration metric: took 400.202708ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.805238   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805248   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.205817   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205847   71966 pod_ready.go:81] duration metric: took 400.591033ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.205858   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205866   71966 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.605705   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605736   71966 pod_ready.go:81] duration metric: took 399.849241ms for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.605745   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605754   71966 pod_ready.go:38] duration metric: took 1.29665644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
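
Each pod_ready.go line above is a per-pod check of the PodReady condition, and pods hosted on a node that is not yet "Ready" are skipped rather than failed (hence the "(skipping!)" entries). A hedged client-go sketch of the core readiness check — the function name and clientset wiring are assumptions, not minikube's actual helper:

    package podready

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the named pod has the Ready condition set to True.
    func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
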
	I0425 20:04:12.605776   71966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:04:12.620368   71966 ops.go:34] apiserver oom_adj: -16
	I0425 20:04:12.620397   71966 kubeadm.go:591] duration metric: took 10.956065292s to restartPrimaryControlPlane
	I0425 20:04:12.620405   71966 kubeadm.go:393] duration metric: took 11.025567867s to StartCluster
	I0425 20:04:12.620419   71966 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.620492   71966 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:04:12.623272   71966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.623577   71966 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:04:12.625335   71966 out.go:177] * Verifying Kubernetes components...
	I0425 20:04:12.623608   71966 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:04:12.623775   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:04:12.626619   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:12.626625   71966 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-512173"
	I0425 20:04:12.626642   71966 addons.go:69] Setting metrics-server=true in profile "embed-certs-512173"
	I0425 20:04:12.626664   71966 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-512173"
	W0425 20:04:12.626674   71966 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:04:12.626681   71966 addons.go:234] Setting addon metrics-server=true in "embed-certs-512173"
	W0425 20:04:12.626690   71966 addons.go:243] addon metrics-server should already be in state true
	I0425 20:04:12.626623   71966 addons.go:69] Setting default-storageclass=true in profile "embed-certs-512173"
	I0425 20:04:12.626709   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626714   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626718   71966 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-512173"
	I0425 20:04:12.626985   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627013   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627020   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627035   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627088   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627130   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.642680   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0425 20:04:12.642798   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0425 20:04:12.642972   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0425 20:04:12.643182   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643288   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643418   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643671   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643696   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643871   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643884   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643893   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643915   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.644227   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644235   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644403   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.644431   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644819   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.644942   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.644980   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.645022   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.647992   71966 addons.go:234] Setting addon default-storageclass=true in "embed-certs-512173"
	W0425 20:04:12.648011   71966 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:04:12.648045   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.648393   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.648429   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.660989   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41421
	I0425 20:04:12.661534   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.662561   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.662592   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.662614   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0425 20:04:12.662804   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0425 20:04:12.662947   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663016   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663116   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.663173   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663515   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663547   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663585   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663604   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663882   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663920   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.664096   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.664487   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.664506   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.665031   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.667087   71966 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:04:12.668326   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:04:12.668343   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:04:12.668361   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.666460   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.669907   71966 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:04:09.373628   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:11.376301   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.671391   71966 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.671411   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:04:12.671427   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.671566   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672113   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.672132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672233   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.672353   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.672439   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.672525   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.674511   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.674926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.674951   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.675178   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.675357   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.675505   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.675662   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.683720   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0425 20:04:12.684195   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.684736   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.684755   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.685100   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.685282   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.687009   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.687257   71966 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.687277   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:04:12.687325   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.689958   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690356   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.690374   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690446   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.690655   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.690841   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.690989   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.846840   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:12.865045   71966 node_ready.go:35] waiting up to 6m0s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:12.938848   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:04:12.938875   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:04:12.941038   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.959316   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.977813   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:04:12.977841   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:04:13.050586   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:13.050610   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:04:13.111207   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:14.253195   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.31212607s)
	I0425 20:04:14.253252   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253247   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.293897647s)
	I0425 20:04:14.253268   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253303   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253371   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253625   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253641   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253650   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253656   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253677   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253690   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253699   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253711   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253876   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254099   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253911   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253949   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253977   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254193   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.260565   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.260584   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.260830   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.260850   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.342979   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.231720554s)
	I0425 20:04:14.343042   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343349   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.343358   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343374   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343390   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343398   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343602   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343623   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343633   71966 addons.go:470] Verifying addon metrics-server=true in "embed-certs-512173"
	I0425 20:04:14.346631   71966 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0425 20:04:14.347936   71966 addons.go:505] duration metric: took 1.724328435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0425 20:04:14.869074   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.383960   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:13.384840   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.883656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.707816   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.207561   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.708264   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.207822   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.707509   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.207507   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.707899   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.208254   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.708246   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:17.207508   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.873212   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.873263   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:18.373183   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:16.870001   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:18.368960   71966 node_ready.go:49] node "embed-certs-512173" has status "Ready":"True"
	I0425 20:04:18.368991   71966 node_ready.go:38] duration metric: took 5.503919958s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:18.369003   71966 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:18.375440   71966 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380902   71966 pod_ready.go:92] pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.380920   71966 pod_ready.go:81] duration metric: took 5.456921ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380928   71966 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386330   71966 pod_ready.go:92] pod "etcd-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.386386   71966 pod_ready.go:81] duration metric: took 5.451019ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386402   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391115   71966 pod_ready.go:92] pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.391138   71966 pod_ready.go:81] duration metric: took 4.727835ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391149   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:20.398316   71966 pod_ready.go:102] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.885191   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:20.384439   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.707948   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.207953   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.707659   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.207609   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.707567   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.207989   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.707938   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.208305   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.707827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:22.207940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.374376   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.873180   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.899221   71966 pod_ready.go:92] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.899240   71966 pod_ready.go:81] duration metric: took 4.508083804s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.899250   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904904   71966 pod_ready.go:92] pod "kube-proxy-8247p" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.904922   71966 pod_ready.go:81] duration metric: took 5.665557ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904929   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910035   71966 pod_ready.go:92] pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.910051   71966 pod_ready.go:81] duration metric: took 5.116298ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910059   71966 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:24.919233   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.884480   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:25.384287   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.707381   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.207532   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.707461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.208239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.707742   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.208365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.707323   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.207485   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.707727   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:27.208332   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.373538   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.872428   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.420297   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.918808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.385722   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.883321   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.707275   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.207776   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.708096   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.207685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.708249   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.207647   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.707943   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.207471   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.707902   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:32.207582   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.872576   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.372818   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.416593   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:34.416976   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:31.884120   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:33.885341   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:35.886190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.708066   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.208090   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.707474   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.207664   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.708110   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.208160   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.707940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.207505   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.708334   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:37.207939   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.375813   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.873166   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.417945   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.916796   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.384530   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.384673   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:37.707256   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.207621   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.708237   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.208327   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.707542   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.207371   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.708300   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.207577   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.708097   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:42.207684   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.876272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:41.372217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.918223   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:43.420086   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.389390   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:44.885243   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.708257   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.207407   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
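
The half-second pgrep loop above (pid 72712) is checking whether a kube-apiserver process has come up at all; it never matches, so this run switches below to listing CRI containers and collecting logs instead. A minimal sketch of that kind of process poll, with sudo and the SSH plumbing omitted:

    package apiserverwait

    import (
        "context"
        "os/exec"
        "time"
    )

    // waitForProcess reruns `pgrep -xnf <pattern>` on a fixed interval until a
    // matching process exists or the context expires.
    func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            // pgrep exits 0 when at least one process matches, 1 when none do.
            if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }
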
	I0425 20:04:43.707548   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:43.707618   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:43.753656   72712 cri.go:89] found id: ""
	I0425 20:04:43.753686   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.753698   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:43.753706   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:43.753770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:43.797957   72712 cri.go:89] found id: ""
	I0425 20:04:43.797982   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.797991   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:43.797996   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:43.798051   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:43.836700   72712 cri.go:89] found id: ""
	I0425 20:04:43.836729   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.836737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:43.836742   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:43.836795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:43.883452   72712 cri.go:89] found id: ""
	I0425 20:04:43.883478   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.883486   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:43.883492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:43.883544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:43.929975   72712 cri.go:89] found id: ""
	I0425 20:04:43.930004   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.930014   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:43.930022   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:43.930089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:43.967648   72712 cri.go:89] found id: ""
	I0425 20:04:43.967681   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.967693   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:43.967701   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:43.967758   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:44.011024   72712 cri.go:89] found id: ""
	I0425 20:04:44.011048   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.011072   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:44.011078   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:44.011129   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:44.050233   72712 cri.go:89] found id: ""
	I0425 20:04:44.050263   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.050274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
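
Every `found id: ""` / "0 containers" pair above comes from running crictl with --quiet, which prints one container ID per line and nothing at all when no container in any state matches the name filter. The same lookup, sketched directly (command and flags as in the log; error handling simplified):

    package crilist

    import (
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of all containers (any state) whose name
    // matches the given component, e.g. "kube-apiserver". An empty slice is
    // what produces the "0 containers" warnings above.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }
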
	I0425 20:04:44.050286   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:44.050302   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:44.196275   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:44.196307   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:44.196323   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:44.260707   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:44.260748   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:44.306051   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:44.306090   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:44.357643   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:44.357682   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:46.875982   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:46.890987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:46.891062   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:46.935855   72712 cri.go:89] found id: ""
	I0425 20:04:46.935878   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.935885   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:46.935891   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:46.935948   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:46.978634   72712 cri.go:89] found id: ""
	I0425 20:04:46.978662   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.978674   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:46.978681   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:46.978749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:47.019845   72712 cri.go:89] found id: ""
	I0425 20:04:47.019864   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.019872   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:47.019877   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:47.019933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:47.065002   72712 cri.go:89] found id: ""
	I0425 20:04:47.065040   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.065064   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:47.065072   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:47.065139   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:47.106370   72712 cri.go:89] found id: ""
	I0425 20:04:47.106404   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.106416   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:47.106423   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:47.106483   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:47.143851   72712 cri.go:89] found id: ""
	I0425 20:04:47.143874   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.143883   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:47.143888   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:47.143932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:47.186130   72712 cri.go:89] found id: ""
	I0425 20:04:47.186160   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.186168   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:47.186174   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:47.186238   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:47.228959   72712 cri.go:89] found id: ""
	I0425 20:04:47.228984   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.228992   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:47.229000   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:47.229010   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:47.299852   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:47.299893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:47.346078   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:47.346111   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:43.872670   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:46.373259   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:45.917948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.919494   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:50.420952   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.388353   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:49.884300   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.405897   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:47.405932   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:47.424426   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:47.424455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:47.506603   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.007697   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:50.023258   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:50.023333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:50.066794   72712 cri.go:89] found id: ""
	I0425 20:04:50.066827   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.066836   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:50.066842   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:50.066913   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:50.109167   72712 cri.go:89] found id: ""
	I0425 20:04:50.109200   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.109212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:50.109219   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:50.109306   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:50.151854   72712 cri.go:89] found id: ""
	I0425 20:04:50.151878   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.151886   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:50.151892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:50.151940   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:50.190600   72712 cri.go:89] found id: ""
	I0425 20:04:50.190632   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.190644   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:50.190672   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:50.190742   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:50.232851   72712 cri.go:89] found id: ""
	I0425 20:04:50.232874   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.232883   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:50.232889   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:50.232935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:50.274941   72712 cri.go:89] found id: ""
	I0425 20:04:50.274971   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.274983   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:50.274990   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:50.275069   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:50.320954   72712 cri.go:89] found id: ""
	I0425 20:04:50.320981   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.320992   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:50.320999   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:50.321068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:50.361799   72712 cri.go:89] found id: ""
	I0425 20:04:50.361829   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.361839   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:50.361847   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:50.361858   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:50.457792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.457819   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:50.457834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:50.539653   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:50.539702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:50.598740   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:50.598774   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:50.650501   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:50.650533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:48.872490   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.374484   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:52.919420   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:55.420126   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.887536   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:54.389174   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:53.167827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:53.183324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:53.183403   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:53.227598   72712 cri.go:89] found id: ""
	I0425 20:04:53.227641   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.227650   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:53.227655   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:53.227700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:53.271170   72712 cri.go:89] found id: ""
	I0425 20:04:53.271200   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.271212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:53.271220   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:53.271304   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:53.318185   72712 cri.go:89] found id: ""
	I0425 20:04:53.318233   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.318246   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:53.318255   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:53.318324   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:53.372199   72712 cri.go:89] found id: ""
	I0425 20:04:53.372228   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.372238   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:53.372244   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:53.372367   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:53.414048   72712 cri.go:89] found id: ""
	I0425 20:04:53.414080   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.414091   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:53.414099   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:53.414170   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:53.455746   72712 cri.go:89] found id: ""
	I0425 20:04:53.455806   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.455819   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:53.455827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:53.455901   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:53.497969   72712 cri.go:89] found id: ""
	I0425 20:04:53.497996   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.498004   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:53.498011   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:53.498057   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:53.543642   72712 cri.go:89] found id: ""
	I0425 20:04:53.543668   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.543675   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:53.543684   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:53.543693   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:53.596106   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:53.596144   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:53.612755   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:53.612787   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:53.693068   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:53.693089   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:53.693102   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:53.771499   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:53.771535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:56.322663   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:56.336866   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:56.336945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:56.375515   72712 cri.go:89] found id: ""
	I0425 20:04:56.375556   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.375567   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:56.375574   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:56.375641   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:56.423230   72712 cri.go:89] found id: ""
	I0425 20:04:56.423261   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.423273   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:56.423281   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:56.423366   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:56.467786   72712 cri.go:89] found id: ""
	I0425 20:04:56.467814   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.467835   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:56.467842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:56.467895   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:56.517671   72712 cri.go:89] found id: ""
	I0425 20:04:56.517696   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.517708   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:56.517715   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:56.517770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:56.558622   72712 cri.go:89] found id: ""
	I0425 20:04:56.558651   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.558662   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:56.558669   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:56.558746   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:56.601350   72712 cri.go:89] found id: ""
	I0425 20:04:56.601374   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.601382   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:56.601387   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:56.601444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:56.645892   72712 cri.go:89] found id: ""
	I0425 20:04:56.645923   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.645934   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:56.645940   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:56.646001   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:56.691619   72712 cri.go:89] found id: ""
	I0425 20:04:56.691645   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.691656   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:56.691665   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:56.691679   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:56.744854   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:56.744891   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:56.762523   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:56.762556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:56.843396   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:56.843422   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:56.843437   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:56.933785   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:56.933825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:53.872514   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.372956   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:58.373649   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:57.917208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.920979   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.884907   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.385506   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.481512   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:59.497510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:59.497588   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:59.547382   72712 cri.go:89] found id: ""
	I0425 20:04:59.547412   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.547423   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:59.547432   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:59.547486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:59.597671   72712 cri.go:89] found id: ""
	I0425 20:04:59.597699   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.597711   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:59.597717   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:59.597762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:59.641455   72712 cri.go:89] found id: ""
	I0425 20:04:59.641486   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.641497   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:59.641503   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:59.641613   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:59.685052   72712 cri.go:89] found id: ""
	I0425 20:04:59.685092   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.685104   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:59.685112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:59.685173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:59.735912   72712 cri.go:89] found id: ""
	I0425 20:04:59.735943   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.735951   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:59.735957   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:59.736025   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:59.799294   72712 cri.go:89] found id: ""
	I0425 20:04:59.799322   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.799332   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:59.799338   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:59.799395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:59.871270   72712 cri.go:89] found id: ""
	I0425 20:04:59.871297   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.871308   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:59.871315   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:59.871377   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:59.919001   72712 cri.go:89] found id: ""
	I0425 20:04:59.919091   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.919110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:59.919120   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:59.919135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:59.973458   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:59.973498   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:59.989729   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:59.989757   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:00.072887   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:00.072911   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:00.072926   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:00.153886   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:00.153921   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:00.873812   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.372969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.417960   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:04.420353   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:01.885238   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.887277   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:02.722771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:02.722831   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:02.770101   72712 cri.go:89] found id: ""
	I0425 20:05:02.770134   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.770147   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:02.770154   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:02.770224   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:02.817819   72712 cri.go:89] found id: ""
	I0425 20:05:02.817854   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.817865   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:02.817898   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:02.817963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:02.857036   72712 cri.go:89] found id: ""
	I0425 20:05:02.857066   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.857077   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:02.857085   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:02.857144   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:02.900112   72712 cri.go:89] found id: ""
	I0425 20:05:02.900145   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.900157   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:02.900164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:02.900221   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:02.941079   72712 cri.go:89] found id: ""
	I0425 20:05:02.941109   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.941116   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:02.941121   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:02.941198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:02.983458   72712 cri.go:89] found id: ""
	I0425 20:05:02.983490   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.983502   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:02.983510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:02.983574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:03.025424   72712 cri.go:89] found id: ""
	I0425 20:05:03.025451   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.025462   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:03.025469   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:03.025556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:03.065285   72712 cri.go:89] found id: ""
	I0425 20:05:03.065316   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.065328   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:03.065340   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:03.065351   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:03.121235   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:03.121267   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:03.138036   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:03.138073   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:03.213604   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:03.213638   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:03.213655   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:03.296696   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:03.296741   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.842642   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:05.859125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:05.859199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:05.906505   72712 cri.go:89] found id: ""
	I0425 20:05:05.906529   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.906537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:05.906542   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:05.906595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:05.950793   72712 cri.go:89] found id: ""
	I0425 20:05:05.950819   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.950831   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:05.950838   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:05.950902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:05.991612   72712 cri.go:89] found id: ""
	I0425 20:05:05.991644   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.991654   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:05.991661   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:05.991755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:06.032273   72712 cri.go:89] found id: ""
	I0425 20:05:06.032314   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.032326   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:06.032334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:06.032392   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:06.071802   72712 cri.go:89] found id: ""
	I0425 20:05:06.071833   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.071844   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:06.071852   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:06.071908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:06.116676   72712 cri.go:89] found id: ""
	I0425 20:05:06.116702   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.116710   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:06.116716   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:06.116759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:06.154720   72712 cri.go:89] found id: ""
	I0425 20:05:06.154753   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.154765   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:06.154771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:06.154842   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:06.196421   72712 cri.go:89] found id: ""
	I0425 20:05:06.196457   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.196469   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:06.196480   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:06.196493   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:06.251061   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:06.251122   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:06.267764   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:06.267799   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:06.345302   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:06.345334   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:06.345349   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:06.427836   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:06.427868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.873928   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.372014   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.422386   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.916659   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.384700   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.883611   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:10.885814   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.989442   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:09.004493   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:09.004551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:09.056062   72712 cri.go:89] found id: ""
	I0425 20:05:09.056086   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.056096   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:09.056101   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:09.056148   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:09.096791   72712 cri.go:89] found id: ""
	I0425 20:05:09.096817   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.096827   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:09.096834   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:09.096889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:09.134649   72712 cri.go:89] found id: ""
	I0425 20:05:09.134680   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.134691   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:09.134698   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:09.134757   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:09.175980   72712 cri.go:89] found id: ""
	I0425 20:05:09.176010   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.176021   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:09.176028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:09.176084   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:09.216263   72712 cri.go:89] found id: ""
	I0425 20:05:09.216299   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.216313   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:09.216325   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:09.216395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:09.260498   72712 cri.go:89] found id: ""
	I0425 20:05:09.260528   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.260538   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:09.260544   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:09.260603   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:09.303154   72712 cri.go:89] found id: ""
	I0425 20:05:09.303178   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.303201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:09.303209   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:09.303269   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:09.350798   72712 cri.go:89] found id: ""
	I0425 20:05:09.350829   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.350840   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:09.350852   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:09.350868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:09.405295   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:09.405332   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:09.422788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:09.422820   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:09.501819   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:09.501841   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:09.501855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:09.586938   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:09.586981   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:12.132731   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:12.148860   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:12.148935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:12.194021   72712 cri.go:89] found id: ""
	I0425 20:05:12.194051   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.194064   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:12.194072   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:12.194152   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:12.234680   72712 cri.go:89] found id: ""
	I0425 20:05:12.234710   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.234721   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:12.234728   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:12.234792   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:12.277751   72712 cri.go:89] found id: ""
	I0425 20:05:12.277783   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.277794   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:12.277802   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:12.277864   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:12.324068   72712 cri.go:89] found id: ""
	I0425 20:05:12.324100   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.324117   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:12.324125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:12.324187   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:10.374594   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.873217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:11.424208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.425980   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.387259   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.884337   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.366797   72712 cri.go:89] found id: ""
	I0425 20:05:12.366825   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.366837   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:12.366844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:12.366903   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:12.413092   72712 cri.go:89] found id: ""
	I0425 20:05:12.413120   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.413132   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:12.413139   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:12.413198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:12.461229   72712 cri.go:89] found id: ""
	I0425 20:05:12.461253   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.461262   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:12.461268   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:12.461333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:12.504646   72712 cri.go:89] found id: ""
	I0425 20:05:12.504669   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.504677   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:12.504685   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:12.504698   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:12.561630   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:12.561673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:12.578043   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:12.578069   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:12.655176   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:12.655195   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:12.655209   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:12.736323   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:12.736357   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.287503   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:15.302830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:15.302893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:15.339479   72712 cri.go:89] found id: ""
	I0425 20:05:15.339509   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.339519   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:15.339527   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:15.339589   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:15.381431   72712 cri.go:89] found id: ""
	I0425 20:05:15.381458   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.381467   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:15.381475   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:15.381537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:15.423729   72712 cri.go:89] found id: ""
	I0425 20:05:15.423755   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.423767   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:15.423774   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:15.423833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:15.464367   72712 cri.go:89] found id: ""
	I0425 20:05:15.464401   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.464413   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:15.464421   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:15.464489   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:15.508306   72712 cri.go:89] found id: ""
	I0425 20:05:15.508336   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.508348   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:15.508356   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:15.508419   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:15.548572   72712 cri.go:89] found id: ""
	I0425 20:05:15.548600   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.548610   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:15.548616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:15.548678   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:15.592885   72712 cri.go:89] found id: ""
	I0425 20:05:15.592914   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.592926   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:15.592933   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:15.592992   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:15.632817   72712 cri.go:89] found id: ""
	I0425 20:05:15.632855   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.632868   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:15.632880   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:15.632900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:15.648443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:15.648470   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:15.726167   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:15.726191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:15.726229   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:15.803028   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:15.803066   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.850519   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:15.850552   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
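The cycle above repeats for the old-k8s-version profile throughout this window: every `crictl ps -a --quiet --name=...` query returns an empty ID list, so logs.go falls back to gathering kubelet, dmesg, CRI-O and container-status output, and `kubectl describe nodes` fails because nothing is serving on localhost:8443 yet. A minimal sketch of the same checks run by hand on the node (commands and paths are taken verbatim from the log above; shell access to the node, e.g. via `minikube ssh`, is assumed):

	  # Ask CRI-O for control-plane containers in any state; an empty result
	  # corresponds to the `found id: ""` lines above.
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo crictl ps -a --quiet --name=etcd

	  # The same log sources minikube gathers when no containers are found.
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	  # Fails with "connection refused" until an API server is listening on :8443.
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig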
	I0425 20:05:14.873291   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:17.372118   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.917932   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.420096   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.384555   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.885930   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
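In parallel, the other three profiles (PIDs 72220, 71966 and 72304) keep polling their metrics-server pods, which never report Ready during this run. A hedged sketch of an equivalent manual check (the context placeholder and the k8s-app label are assumptions, not taken from this log; the pod name is from the entries above):

	  # Inspect the metrics-server pod and its Ready condition in a profile's context.
	  kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-6n2gd \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'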
	I0425 20:05:18.404671   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:18.422600   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:18.422663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:18.476977   72712 cri.go:89] found id: ""
	I0425 20:05:18.477001   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.477009   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:18.477021   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:18.477093   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:18.525595   72712 cri.go:89] found id: ""
	I0425 20:05:18.525631   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.525641   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:18.525648   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:18.525714   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:18.565485   72712 cri.go:89] found id: ""
	I0425 20:05:18.565513   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.565523   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:18.565531   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:18.565600   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:18.612059   72712 cri.go:89] found id: ""
	I0425 20:05:18.612096   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.612106   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:18.612112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:18.612173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:18.659407   72712 cri.go:89] found id: ""
	I0425 20:05:18.659438   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.659449   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:18.659456   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:18.659507   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:18.701065   72712 cri.go:89] found id: ""
	I0425 20:05:18.701092   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.701101   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:18.701106   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:18.701201   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:18.738234   72712 cri.go:89] found id: ""
	I0425 20:05:18.738264   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.738276   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:18.738284   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:18.738343   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:18.780460   72712 cri.go:89] found id: ""
	I0425 20:05:18.780489   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.780498   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:18.780514   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:18.780526   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:18.834345   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:18.834378   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:18.850006   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:18.850033   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:18.932146   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:18.932171   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:18.932185   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:19.015036   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:19.015068   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:21.568250   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:21.582519   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:21.582595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:21.622886   72712 cri.go:89] found id: ""
	I0425 20:05:21.622913   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.622920   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:21.622925   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:21.622974   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:21.664832   72712 cri.go:89] found id: ""
	I0425 20:05:21.664860   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.664874   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:21.664882   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:21.664950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:21.703801   72712 cri.go:89] found id: ""
	I0425 20:05:21.703829   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.703843   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:21.703850   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:21.703911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:21.741502   72712 cri.go:89] found id: ""
	I0425 20:05:21.741540   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.741549   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:21.741555   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:21.741612   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:21.783715   72712 cri.go:89] found id: ""
	I0425 20:05:21.783745   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.783754   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:21.783759   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:21.783803   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:21.822806   72712 cri.go:89] found id: ""
	I0425 20:05:21.822842   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.822851   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:21.822856   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:21.822915   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:21.864996   72712 cri.go:89] found id: ""
	I0425 20:05:21.865020   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.865030   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:21.865037   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:21.865092   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:21.907533   72712 cri.go:89] found id: ""
	I0425 20:05:21.907563   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.907575   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:21.907585   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:21.907601   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:21.964226   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:21.964260   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:21.980096   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:21.980123   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:22.059516   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:22.059539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:22.059566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:22.136752   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:22.136784   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:19.373290   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:21.873377   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.916720   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:22.917156   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.918191   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:23.384566   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:25.885793   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.682139   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:24.697495   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:24.697564   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:24.739725   72712 cri.go:89] found id: ""
	I0425 20:05:24.739750   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.739760   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:24.739766   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:24.739824   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:24.777455   72712 cri.go:89] found id: ""
	I0425 20:05:24.777485   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.777497   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:24.777504   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:24.777566   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:24.821729   72712 cri.go:89] found id: ""
	I0425 20:05:24.821761   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.821774   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:24.821782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:24.821845   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:24.861745   72712 cri.go:89] found id: ""
	I0425 20:05:24.861773   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.861784   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:24.861791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:24.861851   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:24.903441   72712 cri.go:89] found id: ""
	I0425 20:05:24.903470   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.903479   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:24.903486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:24.903544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:24.943589   72712 cri.go:89] found id: ""
	I0425 20:05:24.943618   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.943629   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:24.943637   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:24.943717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:24.983629   72712 cri.go:89] found id: ""
	I0425 20:05:24.983661   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.983672   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:24.983680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:24.983739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:25.022413   72712 cri.go:89] found id: ""
	I0425 20:05:25.022441   72712 logs.go:276] 0 containers: []
	W0425 20:05:25.022451   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:25.022462   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:25.022477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:25.077402   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:25.077438   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:25.094488   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:25.094517   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:25.171485   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:25.171515   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:25.171535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:25.251131   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:25.251166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:24.373762   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:26.873969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.420395   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:29.420994   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:28.384247   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:30.883795   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.797359   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:27.813601   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:27.813659   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:27.854017   72712 cri.go:89] found id: ""
	I0425 20:05:27.854051   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.854061   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:27.854066   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:27.854117   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:27.900425   72712 cri.go:89] found id: ""
	I0425 20:05:27.900451   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.900461   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:27.900468   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:27.900531   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:27.940064   72712 cri.go:89] found id: ""
	I0425 20:05:27.940096   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.940107   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:27.940114   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:27.940174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:27.979363   72712 cri.go:89] found id: ""
	I0425 20:05:27.979385   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.979393   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:27.979399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:27.979442   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:28.019702   72712 cri.go:89] found id: ""
	I0425 20:05:28.019723   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.019731   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:28.019736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:28.019798   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:28.058711   72712 cri.go:89] found id: ""
	I0425 20:05:28.058740   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.058748   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:28.058755   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:28.058810   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:28.104465   72712 cri.go:89] found id: ""
	I0425 20:05:28.104495   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.104507   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:28.104515   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:28.104577   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:28.142399   72712 cri.go:89] found id: ""
	I0425 20:05:28.142431   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.142440   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:28.142449   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:28.142460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:28.222763   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:28.222786   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:28.222801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:28.299797   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:28.299838   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:28.366569   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:28.366594   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:28.424581   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:28.424628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:30.942526   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:30.957400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:30.957482   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:30.996931   72712 cri.go:89] found id: ""
	I0425 20:05:30.996958   72712 logs.go:276] 0 containers: []
	W0425 20:05:30.996967   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:30.996974   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:30.997029   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:31.035673   72712 cri.go:89] found id: ""
	I0425 20:05:31.035700   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.035712   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:31.035719   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:31.035782   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:31.075783   72712 cri.go:89] found id: ""
	I0425 20:05:31.075809   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.075820   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:31.075826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:31.075886   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:31.114229   72712 cri.go:89] found id: ""
	I0425 20:05:31.114257   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.114267   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:31.114274   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:31.114333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:31.155385   72712 cri.go:89] found id: ""
	I0425 20:05:31.155409   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.155419   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:31.155427   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:31.155486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:31.193772   72712 cri.go:89] found id: ""
	I0425 20:05:31.193804   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.193815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:31.193823   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:31.193878   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:31.233886   72712 cri.go:89] found id: ""
	I0425 20:05:31.233909   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.233917   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:31.233923   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:31.233967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:31.273427   72712 cri.go:89] found id: ""
	I0425 20:05:31.273455   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.273465   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:31.273476   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:31.273491   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:31.354429   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:31.354462   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:31.406018   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:31.406047   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:31.460972   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:31.461007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:31.477485   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:31.477513   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:31.551616   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:29.371357   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.373007   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.421948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.424866   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.384577   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.884780   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:34.052808   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:34.068068   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:34.068158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:34.120984   72712 cri.go:89] found id: ""
	I0425 20:05:34.121016   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.121024   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:34.121032   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:34.121082   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:34.160646   72712 cri.go:89] found id: ""
	I0425 20:05:34.160676   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.160687   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:34.160694   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:34.160752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:34.202641   72712 cri.go:89] found id: ""
	I0425 20:05:34.202665   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.202671   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:34.202677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:34.202733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:34.244352   72712 cri.go:89] found id: ""
	I0425 20:05:34.244379   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.244391   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:34.244400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:34.244460   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:34.285858   72712 cri.go:89] found id: ""
	I0425 20:05:34.285885   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.285896   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:34.285904   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:34.285956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:34.323634   72712 cri.go:89] found id: ""
	I0425 20:05:34.323662   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.323673   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:34.323681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:34.323739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:34.365230   72712 cri.go:89] found id: ""
	I0425 20:05:34.365256   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.365272   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:34.365280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:34.365339   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:34.409329   72712 cri.go:89] found id: ""
	I0425 20:05:34.409354   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.409365   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:34.409376   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:34.409390   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:34.464575   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:34.464606   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:34.480244   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:34.480270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:34.560204   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:34.560224   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:34.560236   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:34.640152   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:34.640187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:37.189992   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:37.204683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:37.204786   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:37.245857   72712 cri.go:89] found id: ""
	I0425 20:05:37.245891   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.245903   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:37.245910   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:37.245969   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:37.284668   72712 cri.go:89] found id: ""
	I0425 20:05:37.284696   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.284704   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:37.284710   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:37.284762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:37.324349   72712 cri.go:89] found id: ""
	I0425 20:05:37.324379   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.324391   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:37.324399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:37.324461   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:33.872836   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.873214   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.373278   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.917308   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.419746   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.383933   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.385166   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:37.361764   72712 cri.go:89] found id: ""
	I0425 20:05:37.361787   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.361800   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:37.361811   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:37.361857   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:37.404331   72712 cri.go:89] found id: ""
	I0425 20:05:37.404353   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.404360   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:37.404366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:37.404430   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:37.445284   72712 cri.go:89] found id: ""
	I0425 20:05:37.445316   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.445327   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:37.445334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:37.445395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:37.483806   72712 cri.go:89] found id: ""
	I0425 20:05:37.483828   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.483837   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:37.483843   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:37.483888   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:37.524649   72712 cri.go:89] found id: ""
	I0425 20:05:37.524673   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.524680   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:37.524689   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:37.524701   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:37.581521   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:37.581553   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:37.598459   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:37.598487   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:37.671236   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:37.671256   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:37.671272   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:37.750517   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:37.750556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.293743   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:40.310344   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:40.310426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:40.356157   72712 cri.go:89] found id: ""
	I0425 20:05:40.356198   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.356208   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:40.356215   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:40.356277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:40.397857   72712 cri.go:89] found id: ""
	I0425 20:05:40.397886   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.397895   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:40.397902   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:40.397964   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:40.445034   72712 cri.go:89] found id: ""
	I0425 20:05:40.445057   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.445065   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:40.445071   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:40.445126   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:40.493744   72712 cri.go:89] found id: ""
	I0425 20:05:40.493773   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.493783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:40.493797   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:40.493856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:40.550546   72712 cri.go:89] found id: ""
	I0425 20:05:40.550572   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.550580   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:40.550587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:40.550654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:40.605122   72712 cri.go:89] found id: ""
	I0425 20:05:40.605153   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.605164   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:40.605172   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:40.605232   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:40.675713   72712 cri.go:89] found id: ""
	I0425 20:05:40.675745   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.675755   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:40.675769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:40.675828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:40.716064   72712 cri.go:89] found id: ""
	I0425 20:05:40.716093   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.716101   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:40.716109   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:40.716120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:40.781395   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:40.781441   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:40.797597   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:40.797628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:40.880931   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:40.880956   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:40.880971   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:40.970770   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:40.970800   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.373398   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.873163   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.918560   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.417610   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:45.420963   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.883556   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:44.883719   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.520389   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:43.537668   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:43.537729   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:43.578137   72712 cri.go:89] found id: ""
	I0425 20:05:43.578166   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.578175   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:43.578180   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:43.578247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:43.617428   72712 cri.go:89] found id: ""
	I0425 20:05:43.617454   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.617462   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:43.617466   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:43.617519   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:43.655401   72712 cri.go:89] found id: ""
	I0425 20:05:43.655431   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.655443   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:43.655450   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:43.655514   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:43.695183   72712 cri.go:89] found id: ""
	I0425 20:05:43.695212   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.695230   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:43.695238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:43.695316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:43.735056   72712 cri.go:89] found id: ""
	I0425 20:05:43.735086   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.735098   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:43.735104   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:43.735162   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:43.774761   72712 cri.go:89] found id: ""
	I0425 20:05:43.774789   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.774799   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:43.774830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:43.774889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:43.819102   72712 cri.go:89] found id: ""
	I0425 20:05:43.819128   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.819138   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:43.819146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:43.819206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:43.858235   72712 cri.go:89] found id: ""
	I0425 20:05:43.858267   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.858278   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:43.858289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:43.858303   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:43.940756   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:43.940794   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:43.985878   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:43.985925   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:44.040177   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:44.040207   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:44.055912   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:44.055942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:44.143724   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:46.643923   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:46.658863   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:46.658941   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:46.697826   72712 cri.go:89] found id: ""
	I0425 20:05:46.697850   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.697858   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:46.697884   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:46.697947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:46.739850   72712 cri.go:89] found id: ""
	I0425 20:05:46.739877   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.739888   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:46.739897   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:46.739955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:46.781212   72712 cri.go:89] found id: ""
	I0425 20:05:46.781241   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.781256   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:46.781262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:46.781321   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:46.826005   72712 cri.go:89] found id: ""
	I0425 20:05:46.826036   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.826047   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:46.826055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:46.826109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:46.865428   72712 cri.go:89] found id: ""
	I0425 20:05:46.865456   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.865465   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:46.865472   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:46.865522   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:46.914860   72712 cri.go:89] found id: ""
	I0425 20:05:46.914887   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.914897   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:46.914907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:46.914968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:46.955323   72712 cri.go:89] found id: ""
	I0425 20:05:46.955355   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.955365   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:46.955373   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:46.955436   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:46.999369   72712 cri.go:89] found id: ""
	I0425 20:05:46.999396   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.999408   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:46.999419   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:46.999464   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:47.013865   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:47.013893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:47.094725   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:47.094755   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:47.094771   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:47.178380   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:47.178426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:47.227217   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:47.227249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:45.375272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.872640   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.917579   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.918001   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:46.884746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:48.884818   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.780217   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:49.795690   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:49.795760   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:49.834909   72712 cri.go:89] found id: ""
	I0425 20:05:49.834935   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.834943   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:49.834951   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:49.835004   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:49.872717   72712 cri.go:89] found id: ""
	I0425 20:05:49.872747   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.872755   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:49.872762   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:49.872807   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:49.919348   72712 cri.go:89] found id: ""
	I0425 20:05:49.919376   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.919387   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:49.919395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:49.919465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:49.959673   72712 cri.go:89] found id: ""
	I0425 20:05:49.959705   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.959716   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:49.959728   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:49.959796   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:49.999276   72712 cri.go:89] found id: ""
	I0425 20:05:49.999299   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.999306   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:49.999312   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:49.999361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:50.037426   72712 cri.go:89] found id: ""
	I0425 20:05:50.037454   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.037461   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:50.037466   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:50.037510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:50.080666   72712 cri.go:89] found id: ""
	I0425 20:05:50.080695   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.080703   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:50.080719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:50.080776   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:50.126065   72712 cri.go:89] found id: ""
	I0425 20:05:50.126111   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.126123   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:50.126134   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:50.126148   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:50.140778   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:50.140805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:50.213282   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:50.213308   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:50.213320   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:50.293798   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:50.293832   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:50.336823   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:50.336859   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:49.873685   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.372830   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.919781   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:54.417518   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.382698   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:53.392894   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:55.884231   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.892579   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:52.909556   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:52.909629   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:52.948098   72712 cri.go:89] found id: ""
	I0425 20:05:52.948127   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.948138   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:52.948146   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:52.948206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:52.988813   72712 cri.go:89] found id: ""
	I0425 20:05:52.988840   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.988848   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:52.988853   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:52.988898   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:53.032181   72712 cri.go:89] found id: ""
	I0425 20:05:53.032211   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.032222   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:53.032230   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:53.032288   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:53.075496   72712 cri.go:89] found id: ""
	I0425 20:05:53.075528   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.075538   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:53.075543   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:53.075599   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:53.119037   72712 cri.go:89] found id: ""
	I0425 20:05:53.119070   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.119082   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:53.119095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:53.119158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:53.158276   72712 cri.go:89] found id: ""
	I0425 20:05:53.158303   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.158314   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:53.158321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:53.158381   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:53.196168   72712 cri.go:89] found id: ""
	I0425 20:05:53.196199   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.196211   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:53.196219   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:53.196277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:53.235212   72712 cri.go:89] found id: ""
	I0425 20:05:53.235235   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.235243   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:53.235250   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:53.235261   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:53.290435   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:53.290474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:53.306351   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:53.306380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:53.388623   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:53.388652   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:53.388666   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:53.480388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:53.480426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:56.027403   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:56.042683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:56.042755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:56.083672   72712 cri.go:89] found id: ""
	I0425 20:05:56.083706   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.083718   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:56.083725   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:56.083790   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:56.124071   72712 cri.go:89] found id: ""
	I0425 20:05:56.124105   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.124126   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:56.124134   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:56.124200   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:56.166692   72712 cri.go:89] found id: ""
	I0425 20:05:56.166724   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.166737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:56.166744   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:56.166808   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:56.203833   72712 cri.go:89] found id: ""
	I0425 20:05:56.203871   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.203884   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:56.203892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:56.203950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:56.242277   72712 cri.go:89] found id: ""
	I0425 20:05:56.242319   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.242341   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:56.242349   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:56.242416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:56.281697   72712 cri.go:89] found id: ""
	I0425 20:05:56.281726   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.281733   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:56.281739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:56.281812   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:56.322190   72712 cri.go:89] found id: ""
	I0425 20:05:56.322233   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.322243   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:56.322248   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:56.322310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:56.364831   72712 cri.go:89] found id: ""
	I0425 20:05:56.364853   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.364864   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:56.364875   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:56.364889   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:56.422824   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:56.422856   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:56.437619   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:56.437641   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:56.512938   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:56.512961   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:56.512977   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:56.598670   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:56.598708   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:54.872566   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.873184   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.917352   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.421645   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:58.383740   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:00.384113   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.150322   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:59.166883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:59.166956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:59.205086   72712 cri.go:89] found id: ""
	I0425 20:05:59.205112   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.205121   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:59.205126   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:59.205199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:59.253430   72712 cri.go:89] found id: ""
	I0425 20:05:59.253458   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.253469   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:59.253478   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:59.253539   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:59.293691   72712 cri.go:89] found id: ""
	I0425 20:05:59.293719   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.293731   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:59.293738   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:59.293801   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:59.331580   72712 cri.go:89] found id: ""
	I0425 20:05:59.331604   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.331613   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:59.331619   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:59.331663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:59.369985   72712 cri.go:89] found id: ""
	I0425 20:05:59.370012   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.370023   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:59.370031   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:59.370095   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:59.411636   72712 cri.go:89] found id: ""
	I0425 20:05:59.411662   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.411670   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:59.411676   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:59.411733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:59.454735   72712 cri.go:89] found id: ""
	I0425 20:05:59.454762   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.454774   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:59.454782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:59.454839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:59.497664   72712 cri.go:89] found id: ""
	I0425 20:05:59.497694   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.497704   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:59.497715   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:59.497731   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:59.556694   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:59.556728   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:59.572160   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:59.572187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:59.649040   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:59.649063   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:59.649083   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:59.727941   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:59.727975   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.275513   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:02.290486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:02.290557   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:02.332217   72712 cri.go:89] found id: ""
	I0425 20:06:02.332255   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.332273   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:02.332281   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:02.332357   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:58.873314   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.373601   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.916947   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.418479   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.384744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.885488   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.373346   72712 cri.go:89] found id: ""
	I0425 20:06:02.373370   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.373377   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:02.373382   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:02.373439   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:02.415835   72712 cri.go:89] found id: ""
	I0425 20:06:02.415861   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.415873   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:02.415881   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:02.415939   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:02.458876   72712 cri.go:89] found id: ""
	I0425 20:06:02.458905   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.458917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:02.458926   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:02.459008   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:02.502092   72712 cri.go:89] found id: ""
	I0425 20:06:02.502127   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.502138   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:02.502146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:02.502235   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:02.546357   72712 cri.go:89] found id: ""
	I0425 20:06:02.546383   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.546393   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:02.546399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:02.546459   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:02.586842   72712 cri.go:89] found id: ""
	I0425 20:06:02.586870   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.586881   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:02.586887   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:02.586932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:02.629305   72712 cri.go:89] found id: ""
	I0425 20:06:02.629339   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.629350   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:02.629360   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:02.629374   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.676583   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:02.676626   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:02.731790   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:02.731825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:02.747473   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:02.747499   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:02.824265   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:02.824289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:02.824304   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.408968   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:05.423645   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:05.423713   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:05.467402   72712 cri.go:89] found id: ""
	I0425 20:06:05.467425   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.467434   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:05.467445   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:05.467510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:05.503131   72712 cri.go:89] found id: ""
	I0425 20:06:05.503153   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.503161   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:05.503166   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:05.503216   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:05.545694   72712 cri.go:89] found id: ""
	I0425 20:06:05.545721   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.545732   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:05.545739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:05.545804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:05.585879   72712 cri.go:89] found id: ""
	I0425 20:06:05.585905   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.585912   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:05.585917   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:05.585963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:05.625520   72712 cri.go:89] found id: ""
	I0425 20:06:05.625549   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.625560   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:05.625567   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:05.625620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:05.664306   72712 cri.go:89] found id: ""
	I0425 20:06:05.664335   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.664345   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:05.664364   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:05.664437   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:05.705353   72712 cri.go:89] found id: ""
	I0425 20:06:05.705385   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.705397   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:05.705405   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:05.705468   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:05.743935   72712 cri.go:89] found id: ""
	I0425 20:06:05.743968   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.743977   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:05.743986   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:05.743997   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:05.801190   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:05.801234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:05.817046   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:05.817074   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:05.899413   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:05.899443   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:05.899458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.986303   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:05.986336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:03.872605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:05.876833   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.373392   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.916334   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.917480   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.887784   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:09.387085   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.531748   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:08.550667   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:08.550749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:08.594062   72712 cri.go:89] found id: ""
	I0425 20:06:08.594093   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.594102   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:08.594108   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:08.594163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:08.635823   72712 cri.go:89] found id: ""
	I0425 20:06:08.635861   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.635872   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:08.635880   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:08.635944   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:08.675338   72712 cri.go:89] found id: ""
	I0425 20:06:08.675383   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.675395   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:08.675402   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:08.675463   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:08.715971   72712 cri.go:89] found id: ""
	I0425 20:06:08.716001   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.716012   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:08.716019   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:08.716088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:08.758565   72712 cri.go:89] found id: ""
	I0425 20:06:08.758597   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.758608   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:08.758616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:08.758683   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:08.800179   72712 cri.go:89] found id: ""
	I0425 20:06:08.800207   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.800218   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:08.800226   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:08.800286   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:08.854603   72712 cri.go:89] found id: ""
	I0425 20:06:08.854639   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.854651   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:08.854659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:08.854724   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:08.904115   72712 cri.go:89] found id: ""
	I0425 20:06:08.904141   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.904152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:08.904162   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:08.904177   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:08.921826   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:08.921855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:09.003667   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:09.003687   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:09.003699   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:09.086301   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:09.086346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:09.138478   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:09.138516   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:11.704402   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:11.721810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:11.721902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:11.768790   72712 cri.go:89] found id: ""
	I0425 20:06:11.768829   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.768850   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:11.768858   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:11.768928   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:11.813543   72712 cri.go:89] found id: ""
	I0425 20:06:11.813576   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.813588   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:11.813595   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:11.813654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:11.853930   72712 cri.go:89] found id: ""
	I0425 20:06:11.853962   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.853972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:11.853980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:11.854044   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:11.900808   72712 cri.go:89] found id: ""
	I0425 20:06:11.900843   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.900853   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:11.900861   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:11.900919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:11.948850   72712 cri.go:89] found id: ""
	I0425 20:06:11.948876   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.948885   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:11.948890   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:11.948945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:11.989326   72712 cri.go:89] found id: ""
	I0425 20:06:11.989356   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.989365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:11.989371   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:11.989450   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:12.033912   72712 cri.go:89] found id: ""
	I0425 20:06:12.033943   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.033954   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:12.033959   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:12.034015   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:12.076170   72712 cri.go:89] found id: ""
	I0425 20:06:12.076199   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.076209   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:12.076217   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:12.076230   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:12.124851   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:12.124881   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:12.178927   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:12.178964   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:12.194925   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:12.194952   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:12.272163   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:12.272187   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:12.272202   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:10.374908   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.871613   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:10.917911   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.918144   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:15.419043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:11.886066   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.383880   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.851400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:14.869893   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:14.869967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:14.915793   72712 cri.go:89] found id: ""
	I0425 20:06:14.915820   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.915829   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:14.915836   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:14.915896   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:14.959549   72712 cri.go:89] found id: ""
	I0425 20:06:14.959576   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.959587   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:14.959606   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:14.959672   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:15.001420   72712 cri.go:89] found id: ""
	I0425 20:06:15.001453   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.001465   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:15.001474   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:15.001552   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:15.047960   72712 cri.go:89] found id: ""
	I0425 20:06:15.047988   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.047996   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:15.048001   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:15.048049   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:15.096688   72712 cri.go:89] found id: ""
	I0425 20:06:15.096722   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.096730   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:15.096736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:15.096795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:15.142673   72712 cri.go:89] found id: ""
	I0425 20:06:15.142701   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.142712   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:15.142719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:15.142784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:15.181729   72712 cri.go:89] found id: ""
	I0425 20:06:15.181757   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.181766   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:15.181773   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:15.181820   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:15.227858   72712 cri.go:89] found id: ""
	I0425 20:06:15.227886   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.227897   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:15.227905   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:15.227917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:15.283253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:15.283293   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:15.305572   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:15.305604   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:15.439587   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:15.439615   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:15.439631   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:15.525678   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:15.525714   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:14.872914   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.873605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:17.420065   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:19.917501   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.383915   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:18.883746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:20.884190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:18.078788   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:18.095012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:18.095083   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:18.136753   72712 cri.go:89] found id: ""
	I0425 20:06:18.136784   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.136796   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:18.136802   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:18.136850   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:18.184584   72712 cri.go:89] found id: ""
	I0425 20:06:18.184606   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.184614   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:18.184619   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:18.184691   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:18.228201   72712 cri.go:89] found id: ""
	I0425 20:06:18.228250   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.228263   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:18.228270   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:18.228326   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:18.267756   72712 cri.go:89] found id: ""
	I0425 20:06:18.267778   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.267786   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:18.267792   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:18.267855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:18.309727   72712 cri.go:89] found id: ""
	I0425 20:06:18.309755   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.309763   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:18.309769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:18.309827   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:18.350549   72712 cri.go:89] found id: ""
	I0425 20:06:18.350580   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.350592   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:18.350599   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:18.350656   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:18.393868   72712 cri.go:89] found id: ""
	I0425 20:06:18.393891   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.393902   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:18.393910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:18.393989   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:18.435163   72712 cri.go:89] found id: ""
	I0425 20:06:18.435195   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.435204   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:18.435211   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:18.435224   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:18.450871   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:18.450901   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:18.534501   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:18.534526   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:18.534538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:18.616979   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:18.617015   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:18.663568   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:18.663598   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.217744   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:21.235862   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:21.235955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:21.288966   72712 cri.go:89] found id: ""
	I0425 20:06:21.288996   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.289005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:21.289014   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:21.289075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:21.362068   72712 cri.go:89] found id: ""
	I0425 20:06:21.362092   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.362101   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:21.362108   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:21.362168   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:21.416870   72712 cri.go:89] found id: ""
	I0425 20:06:21.416894   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.416901   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:21.416907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:21.416956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:21.461465   72712 cri.go:89] found id: ""
	I0425 20:06:21.461495   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.461503   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:21.461508   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:21.461570   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:21.499985   72712 cri.go:89] found id: ""
	I0425 20:06:21.500014   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.500025   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:21.500032   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:21.500081   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:21.543725   72712 cri.go:89] found id: ""
	I0425 20:06:21.543764   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.543776   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:21.543784   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:21.543841   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:21.586535   72712 cri.go:89] found id: ""
	I0425 20:06:21.586566   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.586578   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:21.586587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:21.586644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:21.627885   72712 cri.go:89] found id: ""
	I0425 20:06:21.627912   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.627921   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:21.627929   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:21.627942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.685973   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:21.686006   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:21.702529   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:21.702556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:21.781634   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:21.781660   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:21.781673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:21.862986   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:21.863027   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:19.372142   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.374479   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.918699   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.419088   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:23.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:25.883438   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.413547   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:24.428247   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:24.428323   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:24.468708   72712 cri.go:89] found id: ""
	I0425 20:06:24.468757   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.468768   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:24.468775   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:24.468836   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:24.507667   72712 cri.go:89] found id: ""
	I0425 20:06:24.507694   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.507702   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:24.507708   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:24.507769   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:24.548537   72712 cri.go:89] found id: ""
	I0425 20:06:24.548562   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.548570   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:24.548576   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:24.548625   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:24.591240   72712 cri.go:89] found id: ""
	I0425 20:06:24.591264   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.591272   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:24.591280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:24.591325   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:24.631530   72712 cri.go:89] found id: ""
	I0425 20:06:24.631557   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.631568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:24.631575   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:24.631642   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:24.672878   72712 cri.go:89] found id: ""
	I0425 20:06:24.672903   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.672911   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:24.672916   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:24.672960   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:24.716168   72712 cri.go:89] found id: ""
	I0425 20:06:24.716193   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.716201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:24.716206   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:24.716256   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:24.758061   72712 cri.go:89] found id: ""
	I0425 20:06:24.758098   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.758110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:24.758122   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:24.758135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:24.839866   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:24.839900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:24.889288   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:24.889380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:24.946445   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:24.946488   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:24.963093   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:24.963126   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:25.044921   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:23.874297   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.372055   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.375436   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.916503   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.916669   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.887709   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.384645   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.545838   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:27.562659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:27.562717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:27.606462   72712 cri.go:89] found id: ""
	I0425 20:06:27.606491   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.606501   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:27.606509   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:27.606567   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:27.650475   72712 cri.go:89] found id: ""
	I0425 20:06:27.650505   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.650517   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:27.650524   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:27.650583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:27.695163   72712 cri.go:89] found id: ""
	I0425 20:06:27.695190   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.695201   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:27.695208   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:27.695265   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:27.741798   72712 cri.go:89] found id: ""
	I0425 20:06:27.741832   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.741842   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:27.741849   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:27.741904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:27.784146   72712 cri.go:89] found id: ""
	I0425 20:06:27.784175   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.784185   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:27.784193   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:27.784253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:27.827179   72712 cri.go:89] found id: ""
	I0425 20:06:27.827213   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.827225   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:27.827234   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:27.827298   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:27.872941   72712 cri.go:89] found id: ""
	I0425 20:06:27.872961   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.872980   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:27.872985   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:27.873040   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:27.917920   72712 cri.go:89] found id: ""
	I0425 20:06:27.917949   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.917959   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:27.917970   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:27.917985   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:27.971411   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:27.971455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:27.988704   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:27.988743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:28.064208   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:28.064229   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:28.064242   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:28.147388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:28.147427   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.694349   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:30.708595   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:30.708671   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:30.752963   72712 cri.go:89] found id: ""
	I0425 20:06:30.752994   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.753005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:30.753012   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:30.753073   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:30.795453   72712 cri.go:89] found id: ""
	I0425 20:06:30.795488   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.795498   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:30.795507   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:30.795574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:30.838945   72712 cri.go:89] found id: ""
	I0425 20:06:30.838970   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.838978   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:30.838984   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:30.839042   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:30.886128   72712 cri.go:89] found id: ""
	I0425 20:06:30.886160   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.886170   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:30.886178   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:30.886255   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:30.927773   72712 cri.go:89] found id: ""
	I0425 20:06:30.927805   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.927819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:30.927827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:30.927893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:30.968628   72712 cri.go:89] found id: ""
	I0425 20:06:30.968660   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.968672   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:30.968680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:30.968743   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:31.014590   72712 cri.go:89] found id: ""
	I0425 20:06:31.014616   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.014627   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:31.014634   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:31.014697   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:31.053236   72712 cri.go:89] found id: ""
	I0425 20:06:31.053262   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.053274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:31.053285   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:31.053301   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:31.107797   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:31.107834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:31.123675   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:31.123702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:31.201180   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:31.201204   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:31.201215   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:31.289474   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:31.289512   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.873981   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.373083   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.918572   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.420043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:35.421384   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:32.883164   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:34.883697   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.840828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:33.857736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:33.857795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:33.898621   72712 cri.go:89] found id: ""
	I0425 20:06:33.898647   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.898658   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:33.898665   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:33.898727   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:33.939211   72712 cri.go:89] found id: ""
	I0425 20:06:33.939234   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.939245   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:33.939250   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:33.939305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:33.981872   72712 cri.go:89] found id: ""
	I0425 20:06:33.981896   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.981903   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:33.981909   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:33.981965   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:34.027570   72712 cri.go:89] found id: ""
	I0425 20:06:34.027597   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.027609   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:34.027617   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:34.027675   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:34.072544   72712 cri.go:89] found id: ""
	I0425 20:06:34.072570   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.072586   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:34.072594   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:34.072674   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:34.119326   72712 cri.go:89] found id: ""
	I0425 20:06:34.119349   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.119358   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:34.119366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:34.119423   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:34.169618   72712 cri.go:89] found id: ""
	I0425 20:06:34.169642   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.169650   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:34.169655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:34.169705   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:34.213570   72712 cri.go:89] found id: ""
	I0425 20:06:34.213593   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.213601   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:34.213609   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:34.213621   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:34.255722   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:34.255756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:34.311113   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:34.311147   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:34.326869   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:34.326897   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:34.399765   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:34.399788   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:34.399801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:36.986610   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:37.003090   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:37.003163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:37.045929   72712 cri.go:89] found id: ""
	I0425 20:06:37.045956   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.045964   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:37.045969   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:37.046022   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:37.086835   72712 cri.go:89] found id: ""
	I0425 20:06:37.086868   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.086879   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:37.086885   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:37.086937   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:37.127454   72712 cri.go:89] found id: ""
	I0425 20:06:37.127479   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.127488   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:37.127494   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:37.127551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:37.168878   72712 cri.go:89] found id: ""
	I0425 20:06:37.168904   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.168917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:37.168924   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:37.168986   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:37.208859   72712 cri.go:89] found id: ""
	I0425 20:06:37.208889   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.208901   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:37.208914   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:37.208970   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:37.250407   72712 cri.go:89] found id: ""
	I0425 20:06:37.250439   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.250452   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:37.250467   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:37.250536   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:37.291004   72712 cri.go:89] found id: ""
	I0425 20:06:37.291040   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.291054   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:37.291063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:37.291125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:37.335573   72712 cri.go:89] found id: ""
	I0425 20:06:37.335597   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.335608   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:37.335619   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:37.335635   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:35.873065   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.371805   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.426152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:39.916340   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:36.884518   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.884859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.392773   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:37.392810   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:37.408311   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:37.408343   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:37.491376   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:37.491402   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:37.491416   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:37.574559   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:37.574600   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.125241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:40.142254   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:40.142347   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:40.186859   72712 cri.go:89] found id: ""
	I0425 20:06:40.186893   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.186904   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:40.186911   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:40.186972   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:40.229247   72712 cri.go:89] found id: ""
	I0425 20:06:40.229275   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.229288   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:40.229295   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:40.229361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:40.268853   72712 cri.go:89] found id: ""
	I0425 20:06:40.268879   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.268890   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:40.268897   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:40.268959   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:40.307621   72712 cri.go:89] found id: ""
	I0425 20:06:40.307650   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.307669   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:40.307677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:40.307732   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:40.351448   72712 cri.go:89] found id: ""
	I0425 20:06:40.351472   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.351484   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:40.351492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:40.351548   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:40.396771   72712 cri.go:89] found id: ""
	I0425 20:06:40.396804   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.396815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:40.396824   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:40.396890   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:40.443605   72712 cri.go:89] found id: ""
	I0425 20:06:40.443634   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.443642   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:40.443647   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:40.443694   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:40.495496   72712 cri.go:89] found id: ""
	I0425 20:06:40.495525   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.495536   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:40.495548   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:40.495563   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.539428   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:40.539457   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:40.596259   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:40.596305   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:40.613140   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:40.613167   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:40.701768   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:40.701793   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:40.701805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:40.372225   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:42.373541   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.916879   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.917783   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.386292   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.885441   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.294502   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:43.310041   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:43.310113   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:43.351841   72712 cri.go:89] found id: ""
	I0425 20:06:43.351864   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.351872   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:43.351877   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:43.351924   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:43.395467   72712 cri.go:89] found id: ""
	I0425 20:06:43.395497   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.395509   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:43.395516   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:43.395576   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:43.437256   72712 cri.go:89] found id: ""
	I0425 20:06:43.437354   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.437375   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:43.437384   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:43.437465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:43.480744   72712 cri.go:89] found id: ""
	I0425 20:06:43.480772   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.480783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:43.480791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:43.480839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:43.519916   72712 cri.go:89] found id: ""
	I0425 20:06:43.519951   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.519961   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:43.519975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:43.520039   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:43.557861   72712 cri.go:89] found id: ""
	I0425 20:06:43.557890   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.557901   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:43.557910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:43.557968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:43.594423   72712 cri.go:89] found id: ""
	I0425 20:06:43.594449   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.594458   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:43.594464   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:43.594512   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:43.632227   72712 cri.go:89] found id: ""
	I0425 20:06:43.632253   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.632262   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:43.632270   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:43.632281   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:43.688307   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:43.688336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:43.703382   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:43.703407   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:43.782073   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:43.782093   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:43.782109   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:43.872811   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:43.872842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:46.420420   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:46.435110   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:46.435174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:46.474019   72712 cri.go:89] found id: ""
	I0425 20:06:46.474044   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.474054   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:46.474067   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:46.474125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:46.517053   72712 cri.go:89] found id: ""
	I0425 20:06:46.517078   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.517088   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:46.517096   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:46.517150   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:46.560934   72712 cri.go:89] found id: ""
	I0425 20:06:46.560963   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.560972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:46.560977   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:46.561030   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:46.605969   72712 cri.go:89] found id: ""
	I0425 20:06:46.605997   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.606007   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:46.606012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:46.606061   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:46.647025   72712 cri.go:89] found id: ""
	I0425 20:06:46.647049   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.647058   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:46.647063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:46.647118   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:46.686931   72712 cri.go:89] found id: ""
	I0425 20:06:46.686956   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.686966   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:46.686975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:46.687053   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:46.727183   72712 cri.go:89] found id: ""
	I0425 20:06:46.727207   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.727216   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:46.727224   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:46.727277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:46.768030   72712 cri.go:89] found id: ""
	I0425 20:06:46.768059   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.768073   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:46.768085   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:46.768105   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:46.823400   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:46.823439   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:46.838443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:46.838468   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:46.919509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:46.919527   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:46.919538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:46.996250   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:46.996284   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:44.873706   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.874042   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:45.918619   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.418507   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.384559   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.884184   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.885081   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:49.542696   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:49.557346   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:49.557444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:49.595195   72712 cri.go:89] found id: ""
	I0425 20:06:49.595220   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.595231   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:49.595238   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:49.595305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:49.641324   72712 cri.go:89] found id: ""
	I0425 20:06:49.641354   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.641365   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:49.641373   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:49.641426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:49.681510   72712 cri.go:89] found id: ""
	I0425 20:06:49.681540   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.681552   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:49.681559   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:49.681620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:49.721482   72712 cri.go:89] found id: ""
	I0425 20:06:49.721509   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.721518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:49.721525   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:49.721581   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:49.762682   72712 cri.go:89] found id: ""
	I0425 20:06:49.762710   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.762723   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:49.762731   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:49.762793   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:49.801892   72712 cri.go:89] found id: ""
	I0425 20:06:49.801920   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.801932   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:49.801943   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:49.802002   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:49.840347   72712 cri.go:89] found id: ""
	I0425 20:06:49.840376   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.840387   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:49.840395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:49.840458   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:49.898486   72712 cri.go:89] found id: ""
	I0425 20:06:49.898516   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.898527   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:49.898536   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:49.898547   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:49.952735   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:49.952775   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:49.967986   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:49.968018   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:50.048003   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:50.048024   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:50.048040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:50.126062   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:50.126098   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:49.373031   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:51.873671   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.917641   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.418642   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.421542   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.384273   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.384393   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:52.679721   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:52.695636   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:52.695700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:52.738329   72712 cri.go:89] found id: ""
	I0425 20:06:52.738359   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.738368   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:52.738374   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:52.738420   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:52.779388   72712 cri.go:89] found id: ""
	I0425 20:06:52.779418   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.779426   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:52.779433   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:52.779496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:52.821105   72712 cri.go:89] found id: ""
	I0425 20:06:52.821137   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.821149   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:52.821168   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:52.821231   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:52.861781   72712 cri.go:89] found id: ""
	I0425 20:06:52.861817   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.861825   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:52.861831   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:52.861885   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:52.904602   72712 cri.go:89] found id: ""
	I0425 20:06:52.904633   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.904644   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:52.904651   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:52.904712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:52.951137   72712 cri.go:89] found id: ""
	I0425 20:06:52.951174   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.951183   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:52.951188   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:52.951234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:52.994199   72712 cri.go:89] found id: ""
	I0425 20:06:52.994249   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.994257   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:52.994262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:52.994315   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:53.031997   72712 cri.go:89] found id: ""
	I0425 20:06:53.032020   72712 logs.go:276] 0 containers: []
	W0425 20:06:53.032027   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:53.032035   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:53.032046   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:53.111351   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:53.111383   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:53.162470   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:53.162504   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:53.217188   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:53.217223   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:53.233071   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:53.233100   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:53.308983   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:55.809162   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:55.825185   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:55.825259   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:55.865963   72712 cri.go:89] found id: ""
	I0425 20:06:55.865989   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.866001   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:55.866009   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:55.866060   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:55.920565   72712 cri.go:89] found id: ""
	I0425 20:06:55.920601   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.920612   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:55.920620   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:55.920677   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:55.962643   72712 cri.go:89] found id: ""
	I0425 20:06:55.962669   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.962677   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:55.962684   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:55.962738   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:56.000737   72712 cri.go:89] found id: ""
	I0425 20:06:56.000764   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.000773   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:56.000782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:56.000828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:56.042226   72712 cri.go:89] found id: ""
	I0425 20:06:56.042251   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.042259   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:56.042265   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:56.042316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:56.080765   72712 cri.go:89] found id: ""
	I0425 20:06:56.080788   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.080798   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:56.080810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:56.080869   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:56.119563   72712 cri.go:89] found id: ""
	I0425 20:06:56.119590   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.119602   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:56.119608   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:56.119667   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:56.160136   72712 cri.go:89] found id: ""
	I0425 20:06:56.160162   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.160170   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:56.160179   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:56.160193   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:56.213506   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:56.213539   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:56.232121   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:56.232150   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:56.336606   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:56.336629   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:56.336640   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:56.426867   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:56.426908   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:54.374441   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:56.374847   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.916077   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.916521   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.384779   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.884281   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:58.975395   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:58.991064   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:58.991125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:59.031157   72712 cri.go:89] found id: ""
	I0425 20:06:59.031179   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.031190   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:59.031197   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:59.031253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:59.071893   72712 cri.go:89] found id: ""
	I0425 20:06:59.071923   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.071931   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:59.071937   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:59.071998   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:59.114714   72712 cri.go:89] found id: ""
	I0425 20:06:59.114749   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.114760   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:59.114768   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:59.114840   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:59.159482   72712 cri.go:89] found id: ""
	I0425 20:06:59.159510   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.159518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:59.159523   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:59.159575   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:59.201218   72712 cri.go:89] found id: ""
	I0425 20:06:59.201245   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.201253   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:59.201263   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:59.201312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:59.247277   72712 cri.go:89] found id: ""
	I0425 20:06:59.247305   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.247316   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:59.247324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:59.247379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:59.286713   72712 cri.go:89] found id: ""
	I0425 20:06:59.286738   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.286746   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:59.286751   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:59.286804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:59.332263   72712 cri.go:89] found id: ""
	I0425 20:06:59.332296   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.332320   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:59.332332   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:59.332346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:59.416446   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:59.416477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:59.462125   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:59.462166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:59.514881   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:59.514907   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:59.530109   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:59.530134   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:59.605820   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.106478   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:02.124859   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:02.124934   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:02.180491   72712 cri.go:89] found id: ""
	I0425 20:07:02.180526   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.180537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:02.180545   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:02.180601   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:02.237075   72712 cri.go:89] found id: ""
	I0425 20:07:02.237104   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.237118   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:02.237126   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:02.237190   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:02.295104   72712 cri.go:89] found id: ""
	I0425 20:07:02.295129   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.295140   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:02.295148   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:02.295210   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:02.335392   72712 cri.go:89] found id: ""
	I0425 20:07:02.335418   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.335428   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:02.335435   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:02.335496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:58.871748   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.372545   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.373424   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.917135   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.917504   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.885744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:04.385280   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:02.376964   72712 cri.go:89] found id: ""
	I0425 20:07:02.376990   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.377002   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:02.377009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:02.377066   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:02.415460   72712 cri.go:89] found id: ""
	I0425 20:07:02.415484   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.415491   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:02.415496   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:02.415550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:02.461946   72712 cri.go:89] found id: ""
	I0425 20:07:02.461972   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.461993   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:02.462009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:02.462075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:02.502829   72712 cri.go:89] found id: ""
	I0425 20:07:02.502851   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.502858   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:02.502866   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:02.502878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:02.558264   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:02.558296   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:02.574175   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:02.574225   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:02.649363   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.649389   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:02.649404   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:02.730528   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:02.730560   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.276648   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:05.292055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:05.292121   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:05.332849   72712 cri.go:89] found id: ""
	I0425 20:07:05.332874   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.332884   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:05.332892   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:05.332954   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:05.376446   72712 cri.go:89] found id: ""
	I0425 20:07:05.376475   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.376487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:05.376494   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:05.376556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:05.418635   72712 cri.go:89] found id: ""
	I0425 20:07:05.418664   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.418675   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:05.418682   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:05.418745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:05.459082   72712 cri.go:89] found id: ""
	I0425 20:07:05.459113   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.459123   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:05.459128   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:05.459175   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:05.498473   72712 cri.go:89] found id: ""
	I0425 20:07:05.498502   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.498514   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:05.498521   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:05.498578   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:05.543121   72712 cri.go:89] found id: ""
	I0425 20:07:05.543150   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.543159   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:05.543164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:05.543211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:05.585722   72712 cri.go:89] found id: ""
	I0425 20:07:05.585748   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.585758   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:05.585766   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:05.585826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:05.629614   72712 cri.go:89] found id: ""
	I0425 20:07:05.629647   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.629661   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:05.629671   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:05.629685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:05.683974   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:05.684007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:05.700651   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:05.700685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:05.782097   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:05.782127   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:05.782142   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:05.863881   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:05.863918   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.374553   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:07.872114   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.417080   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.417436   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:10.418259   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.885509   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:09.383078   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.412898   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:08.428152   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:08.428206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:08.468403   72712 cri.go:89] found id: ""
	I0425 20:07:08.468441   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.468455   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:08.468464   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:08.468529   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:08.511246   72712 cri.go:89] found id: ""
	I0425 20:07:08.511285   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.511297   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:08.511304   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:08.511363   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:08.553121   72712 cri.go:89] found id: ""
	I0425 20:07:08.553148   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.553155   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:08.553161   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:08.553214   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:08.589723   72712 cri.go:89] found id: ""
	I0425 20:07:08.589745   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.589755   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:08.589762   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:08.589826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:08.629502   72712 cri.go:89] found id: ""
	I0425 20:07:08.629525   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.629533   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:08.629538   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:08.629591   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:08.677107   72712 cri.go:89] found id: ""
	I0425 20:07:08.677144   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.677153   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:08.677164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:08.677212   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:08.716501   72712 cri.go:89] found id: ""
	I0425 20:07:08.716531   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.716542   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:08.716550   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:08.716611   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:08.763473   72712 cri.go:89] found id: ""
	I0425 20:07:08.763503   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.763515   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:08.763526   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:08.763543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:08.848961   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:08.848985   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:08.849000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:08.945851   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:08.945890   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:08.989429   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:08.989460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:09.042721   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:09.042756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.559400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:11.575100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:11.575180   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:11.613246   72712 cri.go:89] found id: ""
	I0425 20:07:11.613271   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.613284   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:11.613290   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:11.613351   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:11.655158   72712 cri.go:89] found id: ""
	I0425 20:07:11.655189   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.655200   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:11.655208   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:11.655266   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:11.695122   72712 cri.go:89] found id: ""
	I0425 20:07:11.695144   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.695151   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:11.695156   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:11.695205   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:11.735578   72712 cri.go:89] found id: ""
	I0425 20:07:11.735604   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.735615   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:11.735621   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:11.735680   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:11.774750   72712 cri.go:89] found id: ""
	I0425 20:07:11.774785   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.774795   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:11.774803   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:11.774855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:11.814878   72712 cri.go:89] found id: ""
	I0425 20:07:11.814908   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.814920   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:11.814939   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:11.815000   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:11.853262   72712 cri.go:89] found id: ""
	I0425 20:07:11.853295   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.853306   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:11.853313   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:11.853379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:11.897291   72712 cri.go:89] found id: ""
	I0425 20:07:11.897314   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.897324   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:11.897333   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:11.897348   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:11.956913   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:11.956945   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.973787   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:11.973821   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:12.055801   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:12.055826   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:12.055842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:12.140238   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:12.140270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:10.372634   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.374037   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.418299   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.919967   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:11.383994   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:13.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:15.884319   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.685296   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:14.699655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:14.699740   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:14.741907   72712 cri.go:89] found id: ""
	I0425 20:07:14.741936   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.741947   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:14.741955   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:14.742017   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:14.786457   72712 cri.go:89] found id: ""
	I0425 20:07:14.786479   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.786487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:14.786493   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:14.786537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:14.825010   72712 cri.go:89] found id: ""
	I0425 20:07:14.825042   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.825055   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:14.825063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:14.825124   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:14.874834   72712 cri.go:89] found id: ""
	I0425 20:07:14.874856   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.874867   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:14.874875   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:14.874933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:14.914636   72712 cri.go:89] found id: ""
	I0425 20:07:14.914674   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.914685   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:14.914693   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:14.914752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:14.959327   72712 cri.go:89] found id: ""
	I0425 20:07:14.959356   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.959365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:14.959372   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:14.959425   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:15.000637   72712 cri.go:89] found id: ""
	I0425 20:07:15.000666   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.000674   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:15.000680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:15.000728   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:15.040497   72712 cri.go:89] found id: ""
	I0425 20:07:15.040523   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.040531   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:15.040539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:15.040550   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:15.120206   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:15.120240   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:15.168292   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:15.168324   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:15.222133   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:15.222164   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:15.237719   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:15.237746   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:15.323404   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:14.872743   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.375231   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.420149   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:19.420277   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:18.384902   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:20.883469   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.823552   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:17.838837   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:17.838911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:17.880547   72712 cri.go:89] found id: ""
	I0425 20:07:17.880584   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.880595   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:17.880608   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:17.880669   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:17.929700   72712 cri.go:89] found id: ""
	I0425 20:07:17.929730   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.929742   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:17.929797   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:17.929861   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:17.974057   72712 cri.go:89] found id: ""
	I0425 20:07:17.974081   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.974088   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:17.974094   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:17.974142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:18.013173   72712 cri.go:89] found id: ""
	I0425 20:07:18.013200   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.013209   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:18.013215   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:18.013267   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:18.053525   72712 cri.go:89] found id: ""
	I0425 20:07:18.053557   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.053568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:18.053580   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:18.053644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:18.095972   72712 cri.go:89] found id: ""
	I0425 20:07:18.096004   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.096016   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:18.096024   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:18.096089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:18.136792   72712 cri.go:89] found id: ""
	I0425 20:07:18.136823   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.136834   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:18.136842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:18.136904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:18.176562   72712 cri.go:89] found id: ""
	I0425 20:07:18.176594   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.176605   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:18.176619   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:18.176634   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:18.254402   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:18.254440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:18.298075   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:18.298112   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:18.356091   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:18.356124   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:18.373788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:18.373822   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:18.452545   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:20.952752   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:20.972054   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:20.972133   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:21.015572   72712 cri.go:89] found id: ""
	I0425 20:07:21.015602   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.015613   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:21.015621   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:21.015689   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:21.053313   72712 cri.go:89] found id: ""
	I0425 20:07:21.053342   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.053352   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:21.053359   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:21.053422   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:21.090343   72712 cri.go:89] found id: ""
	I0425 20:07:21.090373   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.090384   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:21.090391   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:21.090472   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:21.127148   72712 cri.go:89] found id: ""
	I0425 20:07:21.127174   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.127184   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:21.127192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:21.127258   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:21.167175   72712 cri.go:89] found id: ""
	I0425 20:07:21.167199   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.167207   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:21.167212   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:21.167263   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:21.212740   72712 cri.go:89] found id: ""
	I0425 20:07:21.212771   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.212783   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:21.212791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:21.212856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:21.250751   72712 cri.go:89] found id: ""
	I0425 20:07:21.250774   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.250782   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:21.250788   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:21.250833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:21.292387   72712 cri.go:89] found id: ""
	I0425 20:07:21.292414   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.292426   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:21.292436   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:21.292451   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:21.337695   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:21.337726   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:21.395479   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:21.395520   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:21.411538   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:21.411564   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:21.493248   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:21.493270   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:21.493282   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:19.873680   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.372461   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:21.421770   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:23.426808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.883520   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.884554   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.076755   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:24.093549   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:24.093624   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:24.135660   72712 cri.go:89] found id: ""
	I0425 20:07:24.135686   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.135694   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:24.135705   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:24.135784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:24.179778   72712 cri.go:89] found id: ""
	I0425 20:07:24.179799   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.179807   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:24.179824   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:24.179883   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.226745   72712 cri.go:89] found id: ""
	I0425 20:07:24.226771   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.226780   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:24.226785   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:24.226839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:24.273302   72712 cri.go:89] found id: ""
	I0425 20:07:24.273327   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.273347   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:24.273354   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:24.273421   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:24.314117   72712 cri.go:89] found id: ""
	I0425 20:07:24.314149   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.314160   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:24.314167   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:24.314247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:24.353144   72712 cri.go:89] found id: ""
	I0425 20:07:24.353173   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.353184   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:24.353192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:24.353292   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:24.395899   72712 cri.go:89] found id: ""
	I0425 20:07:24.395925   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.395933   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:24.395938   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:24.395988   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:24.444470   72712 cri.go:89] found id: ""
	I0425 20:07:24.444503   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.444514   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:24.444525   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:24.444540   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:24.499845   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:24.499876   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:24.517421   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:24.517449   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:24.596509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:24.596530   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:24.596543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:24.710844   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:24.710878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.259541   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:27.275551   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:27.275609   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:27.314610   72712 cri.go:89] found id: ""
	I0425 20:07:27.314640   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.314651   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:27.314656   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:27.314712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:27.350100   72712 cri.go:89] found id: ""
	I0425 20:07:27.350132   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.350151   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:27.350158   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:27.350226   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.373886   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:26.873863   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:25.917794   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:28.417757   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:30.419922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.384565   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:29.385043   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.390197   72712 cri.go:89] found id: ""
	I0425 20:07:27.390238   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.390249   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:27.390257   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:27.390312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:27.431936   72712 cri.go:89] found id: ""
	I0425 20:07:27.431961   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.431973   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:27.431980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:27.432038   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:27.469175   72712 cri.go:89] found id: ""
	I0425 20:07:27.469204   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.469212   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:27.469218   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:27.469276   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:27.509385   72712 cri.go:89] found id: ""
	I0425 20:07:27.509416   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.509428   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:27.509436   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:27.509503   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:27.548997   72712 cri.go:89] found id: ""
	I0425 20:07:27.549034   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.549045   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:27.549052   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:27.549111   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:27.588925   72712 cri.go:89] found id: ""
	I0425 20:07:27.588959   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.588973   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:27.588985   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:27.589000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.635005   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:27.635040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:27.686587   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:27.686617   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:27.702913   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:27.702942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:27.775525   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:27.775551   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:27.775562   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.352358   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:30.367016   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:30.367088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:30.410878   72712 cri.go:89] found id: ""
	I0425 20:07:30.410906   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.410917   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:30.410927   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:30.410985   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:30.456150   72712 cri.go:89] found id: ""
	I0425 20:07:30.456173   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.456181   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:30.456186   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:30.456234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:30.495409   72712 cri.go:89] found id: ""
	I0425 20:07:30.495439   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.495450   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:30.495458   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:30.495516   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:30.535863   72712 cri.go:89] found id: ""
	I0425 20:07:30.535895   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.535906   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:30.535912   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:30.535971   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:30.573772   72712 cri.go:89] found id: ""
	I0425 20:07:30.573808   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.573819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:30.573826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:30.573892   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:30.626310   72712 cri.go:89] found id: ""
	I0425 20:07:30.626350   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.626362   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:30.626376   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:30.626438   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:30.666302   72712 cri.go:89] found id: ""
	I0425 20:07:30.666332   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.666343   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:30.666350   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:30.666413   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:30.703478   72712 cri.go:89] found id: ""
	I0425 20:07:30.703507   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.703519   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:30.703529   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:30.703543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:30.756532   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:30.756566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:30.772128   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:30.772158   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:30.853701   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:30.853728   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:30.853743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.935879   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:30.935917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:29.372219   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.872125   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:32.865998   72220 pod_ready.go:81] duration metric: took 4m0.000690329s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:32.866038   72220 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0425 20:07:32.866057   72220 pod_ready.go:38] duration metric: took 4m13.047288103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:32.866091   72220 kubeadm.go:591] duration metric: took 4m22.882679222s to restartPrimaryControlPlane
	W0425 20:07:32.866150   72220 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:32.866182   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:32.917319   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:35.421922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.886418   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.894776   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.483702   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:33.498238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:33.498310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:33.545696   72712 cri.go:89] found id: ""
	I0425 20:07:33.545723   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.545731   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:33.545737   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:33.545791   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:33.590808   72712 cri.go:89] found id: ""
	I0425 20:07:33.590837   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.590849   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:33.590857   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:33.590919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:33.634529   72712 cri.go:89] found id: ""
	I0425 20:07:33.634554   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.634562   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:33.634572   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:33.634640   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:33.679055   72712 cri.go:89] found id: ""
	I0425 20:07:33.679082   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.679093   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:33.679100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:33.679160   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:33.720653   72712 cri.go:89] found id: ""
	I0425 20:07:33.720686   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.720698   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:33.720706   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:33.720777   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:33.766163   72712 cri.go:89] found id: ""
	I0425 20:07:33.766221   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.766233   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:33.766241   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:33.766314   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:33.810804   72712 cri.go:89] found id: ""
	I0425 20:07:33.810830   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.810839   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:33.810844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:33.810908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:33.858109   72712 cri.go:89] found id: ""
	I0425 20:07:33.858140   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.858152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:33.858162   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:33.858176   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:33.926296   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:33.926333   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:33.944220   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:33.944249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:34.042119   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:34.042191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:34.042234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:34.143694   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:34.143732   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:36.691575   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:36.710408   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:36.710490   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:36.760097   72712 cri.go:89] found id: ""
	I0425 20:07:36.760135   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.760144   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:36.760150   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:36.760208   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:36.801508   72712 cri.go:89] found id: ""
	I0425 20:07:36.801532   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.801541   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:36.801546   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:36.801602   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:36.842293   72712 cri.go:89] found id: ""
	I0425 20:07:36.842328   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.842340   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:36.842355   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:36.842418   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:36.884101   72712 cri.go:89] found id: ""
	I0425 20:07:36.884131   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.884141   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:36.884149   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:36.884211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:36.925007   72712 cri.go:89] found id: ""
	I0425 20:07:36.925032   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.925039   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:36.925045   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:36.925109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:36.964975   72712 cri.go:89] found id: ""
	I0425 20:07:36.965009   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.965020   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:36.965028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:36.965088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:37.030956   72712 cri.go:89] found id: ""
	I0425 20:07:37.030987   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.030999   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:37.031007   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:37.031080   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:37.105919   72712 cri.go:89] found id: ""
	I0425 20:07:37.105946   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.105956   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:37.105967   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:37.105983   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:37.196376   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:37.196415   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:37.240296   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:37.240334   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:37.304336   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:37.304371   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:37.323146   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:37.323184   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:37.918245   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.418671   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:36.384384   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:38.387656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.883973   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	W0425 20:07:37.414563   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:39.915087   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:39.930987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:39.931068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:39.967641   72712 cri.go:89] found id: ""
	I0425 20:07:39.967682   72712 logs.go:276] 0 containers: []
	W0425 20:07:39.967693   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:39.967698   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:39.967755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:40.009924   72712 cri.go:89] found id: ""
	I0425 20:07:40.009951   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.009959   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:40.009969   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:40.010019   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:40.049644   72712 cri.go:89] found id: ""
	I0425 20:07:40.049675   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.049689   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:40.049697   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:40.049759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:40.090487   72712 cri.go:89] found id: ""
	I0425 20:07:40.090509   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.090519   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:40.090524   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:40.090583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:40.137634   72712 cri.go:89] found id: ""
	I0425 20:07:40.137664   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.137674   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:40.137681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:40.137745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:40.174832   72712 cri.go:89] found id: ""
	I0425 20:07:40.174863   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.174874   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:40.174882   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:40.174947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:40.212559   72712 cri.go:89] found id: ""
	I0425 20:07:40.212585   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.212593   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:40.212598   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:40.212687   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:40.253459   72712 cri.go:89] found id: ""
	I0425 20:07:40.253494   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.253506   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:40.253518   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:40.253533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:40.311253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:40.311288   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:40.326693   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:40.326722   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:40.405792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:40.405816   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:40.405831   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:40.486712   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:40.486749   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:42.419025   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:44.916387   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:41.387375   72304 pod_ready.go:81] duration metric: took 4m0.010411263s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:41.387396   72304 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:07:41.387402   72304 pod_ready.go:38] duration metric: took 4m6.083068398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:41.387414   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:07:41.387441   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:41.387498   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:41.459873   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:41.459899   72304 cri.go:89] found id: ""
	I0425 20:07:41.459907   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:41.459960   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.465470   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:41.465534   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:41.509504   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:41.509523   72304 cri.go:89] found id: ""
	I0425 20:07:41.509530   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:41.509584   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.515012   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:41.515070   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:41.562701   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:41.562727   72304 cri.go:89] found id: ""
	I0425 20:07:41.562737   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:41.562792   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.567856   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:41.567928   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:41.618411   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:41.618441   72304 cri.go:89] found id: ""
	I0425 20:07:41.618452   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:41.618510   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.625757   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:41.625826   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:41.672707   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:41.672734   72304 cri.go:89] found id: ""
	I0425 20:07:41.672741   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:41.672785   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.678040   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:41.678119   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:41.725172   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:41.725196   72304 cri.go:89] found id: ""
	I0425 20:07:41.725205   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:41.725264   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.730651   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:41.730718   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:41.777224   72304 cri.go:89] found id: ""
	I0425 20:07:41.777269   72304 logs.go:276] 0 containers: []
	W0425 20:07:41.777280   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:41.777290   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:41.777380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:41.821498   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:41.821524   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:41.821531   72304 cri.go:89] found id: ""
	I0425 20:07:41.821541   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:41.821599   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.827065   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.831900   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:41.831924   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:41.893198   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:41.893233   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:41.909141   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:41.909169   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:42.051260   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:42.051305   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:42.109173   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:42.109214   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:42.155862   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:42.155894   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:42.222430   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:42.222466   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:42.265323   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:42.265353   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:42.316534   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:42.316569   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:42.363543   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:42.363568   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:42.422389   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:42.422421   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:42.471230   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:42.471259   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.011223   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.011263   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:45.578411   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:45.597748   72304 api_server.go:72] duration metric: took 4m16.066757074s to wait for apiserver process to appear ...
	I0425 20:07:45.597777   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:07:45.597813   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:45.597861   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:45.649452   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:45.649481   72304 cri.go:89] found id: ""
	I0425 20:07:45.649491   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:45.649534   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.654965   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:45.655023   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:45.701151   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:45.701177   72304 cri.go:89] found id: ""
	I0425 20:07:45.701186   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:45.701238   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.706702   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:45.706767   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:45.763142   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:45.763167   72304 cri.go:89] found id: ""
	I0425 20:07:45.763177   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:45.763220   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.768626   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:45.768684   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:45.816615   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:45.816648   72304 cri.go:89] found id: ""
	I0425 20:07:45.816656   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:45.816701   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.822714   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:45.822790   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:45.875652   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:45.875678   72304 cri.go:89] found id: ""
	I0425 20:07:45.875688   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:45.875737   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.881649   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:45.881719   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:45.930631   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:45.930656   72304 cri.go:89] found id: ""
	I0425 20:07:45.930666   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:45.930721   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.939712   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:45.939783   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:45.984646   72304 cri.go:89] found id: ""
	I0425 20:07:45.984684   72304 logs.go:276] 0 containers: []
	W0425 20:07:45.984693   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:45.984699   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:45.984754   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:46.029752   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.029777   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.029782   72304 cri.go:89] found id: ""
	I0425 20:07:46.029789   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:46.029845   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.035189   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.040479   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:46.040503   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:46.101469   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:46.101509   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:46.167362   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:46.167401   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:46.217732   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:46.217759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:46.264372   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:46.264404   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:43.037730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:43.064471   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:43.064550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:43.130075   72712 cri.go:89] found id: ""
	I0425 20:07:43.130111   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.130129   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:43.130136   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:43.130195   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:43.169628   72712 cri.go:89] found id: ""
	I0425 20:07:43.169663   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.169675   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:43.169682   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:43.169748   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:43.214845   72712 cri.go:89] found id: ""
	I0425 20:07:43.214869   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.214877   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:43.214883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:43.214929   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:43.263047   72712 cri.go:89] found id: ""
	I0425 20:07:43.263069   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.263078   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:43.263083   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:43.263142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:43.313179   72712 cri.go:89] found id: ""
	I0425 20:07:43.313213   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.313223   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:43.313231   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:43.313295   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:43.353440   72712 cri.go:89] found id: ""
	I0425 20:07:43.353468   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.353480   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:43.353488   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:43.353546   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:43.392261   72712 cri.go:89] found id: ""
	I0425 20:07:43.392288   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.392296   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:43.392321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:43.392378   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:43.431111   72712 cri.go:89] found id: ""
	I0425 20:07:43.431139   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.431147   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:43.431155   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:43.431165   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:43.485087   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:43.485120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:43.501508   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:43.501536   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:43.586041   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:43.586073   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:43.586089   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.663194   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.663232   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:46.218461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:46.233195   72712 kubeadm.go:591] duration metric: took 4m4.06065248s to restartPrimaryControlPlane
	W0425 20:07:46.233281   72712 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:46.233311   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:48.166680   72712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.933342568s)
	I0425 20:07:48.166771   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:48.185391   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:07:48.198250   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:07:48.209825   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:07:48.209843   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:07:48.209897   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:07:48.220854   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:07:48.220909   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:07:48.231518   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:07:48.241515   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:07:48.241589   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:07:48.251764   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.261762   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:07:48.261813   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.271952   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:07:48.281914   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:07:48.281986   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:07:48.292879   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:07:48.372322   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:07:48.372460   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:07:48.529730   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:07:48.529854   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:07:48.529979   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:07:48.753171   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:07:48.755473   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:07:48.755590   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:07:48.755692   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:07:48.755809   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:07:48.755905   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:07:48.756132   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:07:48.756317   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:07:48.756867   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:07:48.757498   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:07:48.758073   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:07:48.758581   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:07:48.758745   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:07:48.758842   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:07:48.894873   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:07:48.946907   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:07:49.084938   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:07:49.201925   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:07:49.219675   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:07:49.220891   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:07:49.220951   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:07:49.387310   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:07:46.917886   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:48.919793   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:46.324627   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:46.324653   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:46.382068   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:46.382102   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.424672   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:46.424709   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.466659   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:46.466692   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:46.484868   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:46.484898   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:46.614688   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:46.614720   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:46.666805   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:46.666846   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:47.098854   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:47.098899   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:49.653042   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:07:49.657843   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:07:49.659251   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:07:49.659285   72304 api_server.go:131] duration metric: took 4.061499319s to wait for apiserver health ...
	I0425 20:07:49.659295   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:07:49.659321   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:49.659380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:49.709699   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:49.709721   72304 cri.go:89] found id: ""
	I0425 20:07:49.709729   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:49.709795   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.715369   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:49.715429   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:49.773517   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:49.773544   72304 cri.go:89] found id: ""
	I0425 20:07:49.773554   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:49.773617   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.778984   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:49.779071   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:49.825707   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:49.825739   72304 cri.go:89] found id: ""
	I0425 20:07:49.825746   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:49.825790   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.830613   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:49.830678   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:49.872068   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:49.872094   72304 cri.go:89] found id: ""
	I0425 20:07:49.872104   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:49.872166   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.877311   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:49.877383   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:49.930182   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:49.930216   72304 cri.go:89] found id: ""
	I0425 20:07:49.930228   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:49.930283   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.935415   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:49.935484   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:49.985377   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:49.985404   72304 cri.go:89] found id: ""
	I0425 20:07:49.985412   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:49.985469   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.991021   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:49.991092   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:50.037755   72304 cri.go:89] found id: ""
	I0425 20:07:50.037787   72304 logs.go:276] 0 containers: []
	W0425 20:07:50.037802   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:50.037811   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:50.037875   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:50.083706   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.083731   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.083735   72304 cri.go:89] found id: ""
	I0425 20:07:50.083742   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:50.083793   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.088730   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.094339   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:50.094371   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:50.161538   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:50.161573   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.204178   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:50.204211   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.251315   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:50.251344   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:50.315859   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:50.315886   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:50.367787   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:50.367829   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:50.429509   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:50.429541   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:50.488723   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:50.488759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:50.506838   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:50.506879   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:50.629496   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:50.629526   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:50.689286   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:50.689321   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:50.731343   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:50.731373   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:50.772085   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:50.772114   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:49.389887   72712 out.go:204]   - Booting up control plane ...
	I0425 20:07:49.390011   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:07:49.395060   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:07:49.398108   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:07:49.398220   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:07:49.402596   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:07:53.651817   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:07:53.651845   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.651850   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.651854   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.651859   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.651862   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.651865   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.651872   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.651878   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.651885   72304 system_pods.go:74] duration metric: took 3.992584481s to wait for pod list to return data ...
	I0425 20:07:53.651892   72304 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:07:53.654617   72304 default_sa.go:45] found service account: "default"
	I0425 20:07:53.654641   72304 default_sa.go:55] duration metric: took 2.742232ms for default service account to be created ...
	I0425 20:07:53.654649   72304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:07:53.660082   72304 system_pods.go:86] 8 kube-system pods found
	I0425 20:07:53.660110   72304 system_pods.go:89] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.660116   72304 system_pods.go:89] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.660121   72304 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.660127   72304 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.660131   72304 system_pods.go:89] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.660135   72304 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.660142   72304 system_pods.go:89] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.660148   72304 system_pods.go:89] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.660154   72304 system_pods.go:126] duration metric: took 5.50043ms to wait for k8s-apps to be running ...
	I0425 20:07:53.660161   72304 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:07:53.660201   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:53.677461   72304 system_svc.go:56] duration metric: took 17.289854ms WaitForService to wait for kubelet
	I0425 20:07:53.677499   72304 kubeadm.go:576] duration metric: took 4m24.146512306s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:07:53.677524   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:07:53.681527   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:07:53.681562   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:07:53.681576   72304 node_conditions.go:105] duration metric: took 4.045221ms to run NodePressure ...
	I0425 20:07:53.681591   72304 start.go:240] waiting for startup goroutines ...
	I0425 20:07:53.681605   72304 start.go:245] waiting for cluster config update ...
	I0425 20:07:53.681622   72304 start.go:254] writing updated cluster config ...
	I0425 20:07:53.682002   72304 ssh_runner.go:195] Run: rm -f paused
	I0425 20:07:53.732056   72304 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:07:53.734302   72304 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142196" cluster and "default" namespace by default
	I0425 20:07:51.419808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:53.916090   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:55.917139   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:58.417609   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:00.917152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:02.918628   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.419508   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.765908   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.899694836s)
	I0425 20:08:05.765989   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:05.787711   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:08:05.801717   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:08:05.813710   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:08:05.813741   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:08:05.813802   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:08:05.825122   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:08:05.825202   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:08:05.837118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:08:05.848807   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:08:05.848880   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:08:05.862028   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.873795   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:08:05.873919   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.885577   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:08:05.897605   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:08:05.897685   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:08:05.909284   72220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:08:05.965574   72220 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 20:08:05.965663   72220 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:08:06.133359   72220 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:08:06.133525   72220 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:08:06.133675   72220 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:08:06.391437   72220 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:08:06.393805   72220 out.go:204]   - Generating certificates and keys ...
	I0425 20:08:06.393905   72220 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:08:06.393994   72220 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:08:06.394121   72220 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:08:06.394237   72220 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:08:06.394332   72220 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:08:06.394417   72220 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:08:06.394514   72220 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:08:06.396093   72220 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:08:06.396202   72220 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:08:06.396300   72220 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:08:06.396358   72220 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:08:06.396423   72220 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:08:06.683452   72220 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:08:06.778456   72220 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 20:08:06.923709   72220 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:08:07.079685   72220 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:08:07.170533   72220 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:08:07.171070   72220 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:08:07.173798   72220 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:08:07.175699   72220 out.go:204]   - Booting up control plane ...
	I0425 20:08:07.175824   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:08:07.175924   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:08:07.176060   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:08:07.197685   72220 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:08:07.200579   72220 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:08:07.200645   72220 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:08:07.354665   72220 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 20:08:07.354779   72220 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 20:08:07.855900   72220 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.56346ms
	I0425 20:08:07.856015   72220 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 20:08:07.423114   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:09.425115   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:13.358654   72220 kubeadm.go:309] [api-check] The API server is healthy after 5.502458238s
	I0425 20:08:13.388381   72220 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 20:08:13.908867   72220 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 20:08:13.945417   72220 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 20:08:13.945708   72220 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-744552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 20:08:13.959901   72220 kubeadm.go:309] [bootstrap-token] Using token: r2mxoe.iuelddsr8gvoq1wo
	I0425 20:08:13.961409   72220 out.go:204]   - Configuring RBAC rules ...
	I0425 20:08:13.961552   72220 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 20:08:13.970435   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 20:08:13.978933   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 20:08:13.982503   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 20:08:13.987029   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 20:08:13.990969   72220 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 20:08:14.103051   72220 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 20:08:14.554715   72220 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 20:08:15.105951   72220 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 20:08:15.107134   72220 kubeadm.go:309] 
	I0425 20:08:15.107222   72220 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 20:08:15.107236   72220 kubeadm.go:309] 
	I0425 20:08:15.107336   72220 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 20:08:15.107349   72220 kubeadm.go:309] 
	I0425 20:08:15.107379   72220 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 20:08:15.107463   72220 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 20:08:15.107550   72220 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 20:08:15.107560   72220 kubeadm.go:309] 
	I0425 20:08:15.107657   72220 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 20:08:15.107668   72220 kubeadm.go:309] 
	I0425 20:08:15.107735   72220 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 20:08:15.107747   72220 kubeadm.go:309] 
	I0425 20:08:15.107807   72220 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 20:08:15.107935   72220 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 20:08:15.108030   72220 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 20:08:15.108042   72220 kubeadm.go:309] 
	I0425 20:08:15.108154   72220 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 20:08:15.108269   72220 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 20:08:15.108280   72220 kubeadm.go:309] 
	I0425 20:08:15.108395   72220 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.108556   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
	I0425 20:08:15.108594   72220 kubeadm.go:309] 	--control-plane 
	I0425 20:08:15.108603   72220 kubeadm.go:309] 
	I0425 20:08:15.108719   72220 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 20:08:15.108730   72220 kubeadm.go:309] 
	I0425 20:08:15.108849   72220 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.109004   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed 
	I0425 20:08:15.109717   72220 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:08:15.109778   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:08:15.109797   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:08:15.111712   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:08:11.918414   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:14.420753   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:15.113288   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:08:15.129693   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:08:15.157631   72220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:08:15.157709   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.157760   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-744552 minikube.k8s.io/updated_at=2024_04_25T20_08_15_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=no-preload-744552 minikube.k8s.io/primary=true
	I0425 20:08:15.374198   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.418592   72220 ops.go:34] apiserver oom_adj: -16
	I0425 20:08:15.874721   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.374969   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.875091   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.375038   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.874685   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:18.374802   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.917617   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:19.421721   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:18.874931   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.374961   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.874349   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.374787   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.875130   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.374959   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.874325   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.374798   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.875034   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:23.374899   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.917898   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:22.917132   71966 pod_ready.go:81] duration metric: took 4m0.007062693s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:08:22.917156   71966 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:08:22.917164   71966 pod_ready.go:38] duration metric: took 4m4.548150095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:22.917179   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:22.917211   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:22.917270   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:22.982604   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:22.982631   71966 cri.go:89] found id: ""
	I0425 20:08:22.982640   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:22.982698   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:22.988558   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:22.988618   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:23.031937   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.031964   71966 cri.go:89] found id: ""
	I0425 20:08:23.031973   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:23.032031   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.037315   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:23.037371   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:23.089839   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.089862   71966 cri.go:89] found id: ""
	I0425 20:08:23.089872   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:23.089936   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.095247   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:23.095309   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:23.136257   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.136286   71966 cri.go:89] found id: ""
	I0425 20:08:23.136294   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:23.136357   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.142548   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:23.142608   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:23.186190   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.186229   71966 cri.go:89] found id: ""
	I0425 20:08:23.186239   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:23.186301   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.191422   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:23.191494   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:23.242326   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.242361   71966 cri.go:89] found id: ""
	I0425 20:08:23.242371   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:23.242437   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.248578   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:23.248642   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:23.286781   71966 cri.go:89] found id: ""
	I0425 20:08:23.286807   71966 logs.go:276] 0 containers: []
	W0425 20:08:23.286817   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:23.286823   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:23.286885   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:23.334728   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:23.334754   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.334761   71966 cri.go:89] found id: ""
	I0425 20:08:23.334770   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:23.334831   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.340288   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.344787   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:23.344808   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:23.401830   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:23.401865   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:23.425683   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:23.425715   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:23.568527   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:23.568558   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.608747   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:23.608776   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.647962   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:23.647996   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.687270   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:23.687308   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:23.745081   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:23.745112   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:23.799375   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:23.799405   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.853199   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:23.853232   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.896535   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:23.896571   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.964317   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:23.964350   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:24.013196   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:24.013231   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:23.874275   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.374250   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.874396   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.374767   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.874968   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.374333   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.874916   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.374369   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.499044   72220 kubeadm.go:1107] duration metric: took 12.341393953s to wait for elevateKubeSystemPrivileges
	W0425 20:08:27.499078   72220 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 20:08:27.499087   72220 kubeadm.go:393] duration metric: took 5m17.572541498s to StartCluster
	I0425 20:08:27.499108   72220 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.499189   72220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:08:27.500940   72220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.501192   72220 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:08:27.503257   72220 out.go:177] * Verifying Kubernetes components...
	I0425 20:08:27.501308   72220 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:08:27.501405   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:08:27.505389   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:08:27.505403   72220 addons.go:69] Setting storage-provisioner=true in profile "no-preload-744552"
	I0425 20:08:27.505438   72220 addons.go:234] Setting addon storage-provisioner=true in "no-preload-744552"
	W0425 20:08:27.505453   72220 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:08:27.505490   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505505   72220 addons.go:69] Setting metrics-server=true in profile "no-preload-744552"
	I0425 20:08:27.505535   72220 addons.go:234] Setting addon metrics-server=true in "no-preload-744552"
	W0425 20:08:27.505546   72220 addons.go:243] addon metrics-server should already be in state true
	I0425 20:08:27.505574   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505895   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.505922   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.505492   72220 addons.go:69] Setting default-storageclass=true in profile "no-preload-744552"
	I0425 20:08:27.505990   72220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-744552"
	I0425 20:08:27.505952   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506099   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.506418   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506467   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.523666   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0425 20:08:27.526950   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0425 20:08:27.526972   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.526981   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I0425 20:08:27.527536   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527606   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527662   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.527683   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528039   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528059   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528122   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528228   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528242   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528601   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528644   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528712   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.528735   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.528800   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.529228   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.529246   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.532151   72220 addons.go:234] Setting addon default-storageclass=true in "no-preload-744552"
	W0425 20:08:27.532171   72220 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:08:27.532204   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.532543   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.532582   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.547165   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0425 20:08:27.547700   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.548354   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.548368   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.548675   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.548793   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.550640   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.554301   72220 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:08:27.553061   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0425 20:08:27.553099   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0425 20:08:27.555613   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:08:27.555630   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:08:27.555652   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.556177   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556181   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556724   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556739   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.556868   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556879   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.557128   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.557700   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.557729   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.558142   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.558406   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.559420   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.559990   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.560057   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.560076   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.560177   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.560333   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.560549   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.560967   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.562839   72220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:08:27.564442   72220 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.564480   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:08:27.564517   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.567912   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.568153   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.568171   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.570321   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.570514   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.570709   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.570945   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.578396   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
	I0425 20:08:27.586629   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.587070   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.587082   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.587584   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.587736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.589708   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.589937   72220 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.589948   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:08:27.589961   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.592640   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.592983   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.593007   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.593261   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.593541   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.593736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.593906   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.783858   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:08:27.820917   72220 node_ready.go:35] waiting up to 6m0s for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832349   72220 node_ready.go:49] node "no-preload-744552" has status "Ready":"True"
	I0425 20:08:27.832377   72220 node_ready.go:38] duration metric: took 11.423909ms for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832390   72220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:27.844475   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:27.886461   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:08:27.886483   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:08:27.899413   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.931511   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.935073   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:08:27.935098   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:08:27.989052   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:27.989082   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:08:28.016326   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:28.551863   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551894   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.551964   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551976   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552255   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552280   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552292   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552315   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552358   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.552397   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552405   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552414   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552421   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552571   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552597   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552710   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552736   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.578416   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.578445   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.578730   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.578776   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.578789   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.945831   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.945861   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946170   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946191   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946214   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.946224   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946531   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946549   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946560   72220 addons.go:470] Verifying addon metrics-server=true in "no-preload-744552"
	I0425 20:08:28.946570   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.948485   72220 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0425 20:08:27.005360   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:27.024856   71966 api_server.go:72] duration metric: took 4m14.401244231s to wait for apiserver process to appear ...
	I0425 20:08:27.024881   71966 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:08:27.024922   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:27.024982   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:27.072098   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:27.072129   71966 cri.go:89] found id: ""
	I0425 20:08:27.072140   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:27.072210   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.077726   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:27.077793   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:27.118834   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:27.118855   71966 cri.go:89] found id: ""
	I0425 20:08:27.118864   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:27.118917   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.125277   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:27.125347   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:27.167036   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.167064   71966 cri.go:89] found id: ""
	I0425 20:08:27.167074   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:27.167131   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.172390   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:27.172468   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:27.212933   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:27.212957   71966 cri.go:89] found id: ""
	I0425 20:08:27.212967   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:27.213022   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.218033   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:27.218083   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:27.259294   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:27.259321   71966 cri.go:89] found id: ""
	I0425 20:08:27.259331   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:27.259384   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.265537   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:27.265610   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:27.312145   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:27.312174   71966 cri.go:89] found id: ""
	I0425 20:08:27.312183   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:27.312240   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.318346   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:27.318405   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:27.362467   71966 cri.go:89] found id: ""
	I0425 20:08:27.362495   71966 logs.go:276] 0 containers: []
	W0425 20:08:27.362504   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:27.362509   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:27.362569   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:27.406810   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:27.406834   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.406839   71966 cri.go:89] found id: ""
	I0425 20:08:27.406846   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:27.406903   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.412431   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.421695   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:27.421725   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.472832   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:27.472863   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.535799   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:27.535830   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:28.004964   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:28.005006   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:28.072378   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:28.072417   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:28.236479   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:28.236523   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:28.296095   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:28.296133   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:28.351290   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:28.351314   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:28.400529   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:28.400567   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:28.459149   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:28.459178   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:28.507818   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:28.507844   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:28.565596   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:28.565627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:28.588509   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:28.588535   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:29.403321   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:08:29.403717   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:29.404001   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:28.950127   72220 addons.go:505] duration metric: took 1.448816058s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0425 20:08:29.862142   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:30.851653   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.851677   72220 pod_ready.go:81] duration metric: took 3.007171918s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.851689   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857090   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.857108   72220 pod_ready.go:81] duration metric: took 5.412841ms for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857117   72220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863315   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.863331   72220 pod_ready.go:81] duration metric: took 6.207835ms for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863339   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867557   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.867579   72220 pod_ready.go:81] duration metric: took 4.23311ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867590   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872391   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.872407   72220 pod_ready.go:81] duration metric: took 4.810397ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872415   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249226   72220 pod_ready.go:92] pod "kube-proxy-22w7x" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.249259   72220 pod_ready.go:81] duration metric: took 376.837327ms for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249284   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649908   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.649934   72220 pod_ready.go:81] duration metric: took 400.641991ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649945   72220 pod_ready.go:38] duration metric: took 3.817541056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:31.649962   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:31.650025   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:31.684094   72220 api_server.go:72] duration metric: took 4.182865357s to wait for apiserver process to appear ...
	I0425 20:08:31.684123   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:08:31.684146   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:08:31.689688   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:08:31.690939   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.690963   72220 api_server.go:131] duration metric: took 6.831773ms to wait for apiserver health ...
	I0425 20:08:31.690973   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.853816   72220 system_pods.go:59] 9 kube-system pods found
	I0425 20:08:31.853849   72220 system_pods.go:61] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:31.853856   72220 system_pods.go:61] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:31.853861   72220 system_pods.go:61] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:31.853868   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:31.853872   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:31.853877   72220 system_pods.go:61] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:31.853881   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:31.853889   72220 system_pods.go:61] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:31.853894   72220 system_pods.go:61] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:31.853907   72220 system_pods.go:74] duration metric: took 162.928561ms to wait for pod list to return data ...
	I0425 20:08:31.853916   72220 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:32.049906   72220 default_sa.go:45] found service account: "default"
	I0425 20:08:32.049932   72220 default_sa.go:55] duration metric: took 196.003422ms for default service account to be created ...
	I0425 20:08:32.049942   72220 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:32.255245   72220 system_pods.go:86] 9 kube-system pods found
	I0425 20:08:32.255290   72220 system_pods.go:89] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:32.255298   72220 system_pods.go:89] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:32.255304   72220 system_pods.go:89] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:32.255311   72220 system_pods.go:89] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:32.255317   72220 system_pods.go:89] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:32.255322   72220 system_pods.go:89] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:32.255328   72220 system_pods.go:89] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:32.255338   72220 system_pods.go:89] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:32.255348   72220 system_pods.go:89] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:32.255368   72220 system_pods.go:126] duration metric: took 205.41905ms to wait for k8s-apps to be running ...
	I0425 20:08:32.255378   72220 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:32.255429   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:32.274141   72220 system_svc.go:56] duration metric: took 18.75721ms WaitForService to wait for kubelet
	I0425 20:08:32.274173   72220 kubeadm.go:576] duration metric: took 4.77294686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:32.274198   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:32.449699   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:32.449727   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:32.449741   72220 node_conditions.go:105] duration metric: took 175.536406ms to run NodePressure ...
	I0425 20:08:32.449755   72220 start.go:240] waiting for startup goroutines ...
	I0425 20:08:32.449765   72220 start.go:245] waiting for cluster config update ...
	I0425 20:08:32.449778   72220 start.go:254] writing updated cluster config ...
	I0425 20:08:32.450108   72220 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:32.503317   72220 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:32.505391   72220 out.go:177] * Done! kubectl is now configured to use "no-preload-744552" cluster and "default" namespace by default
	I0425 20:08:31.153636   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:08:31.158526   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:08:31.159775   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.159817   71966 api_server.go:131] duration metric: took 4.134911832s to wait for apiserver health ...
	I0425 20:08:31.159827   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.159847   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:31.159890   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:31.201597   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:31.201616   71966 cri.go:89] found id: ""
	I0425 20:08:31.201625   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:31.201667   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.206973   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:31.207039   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:31.248400   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:31.248424   71966 cri.go:89] found id: ""
	I0425 20:08:31.248435   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:31.248496   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.253822   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:31.253879   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:31.298921   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:31.298946   71966 cri.go:89] found id: ""
	I0425 20:08:31.298956   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:31.299003   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.304691   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:31.304758   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:31.351773   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:31.351796   71966 cri.go:89] found id: ""
	I0425 20:08:31.351804   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:31.351851   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.356599   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:31.356651   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:31.399655   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:31.399678   71966 cri.go:89] found id: ""
	I0425 20:08:31.399686   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:31.399740   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.405103   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:31.405154   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:31.452763   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:31.452785   71966 cri.go:89] found id: ""
	I0425 20:08:31.452794   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:31.452840   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.457788   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:31.457838   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:31.503746   71966 cri.go:89] found id: ""
	I0425 20:08:31.503780   71966 logs.go:276] 0 containers: []
	W0425 20:08:31.503791   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:31.503798   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:31.503868   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:31.548517   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:31.548543   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:31.548555   71966 cri.go:89] found id: ""
	I0425 20:08:31.548565   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:31.548631   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.553673   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.558271   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:31.558290   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:31.974349   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:31.974387   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:32.033292   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:32.033327   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:32.050762   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:32.050791   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:32.101591   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:32.101627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:32.142626   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:32.142652   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:32.203270   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:32.203315   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:32.247021   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:32.247048   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:32.294900   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:32.294936   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:32.353902   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:32.353934   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:32.488543   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:32.488584   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:32.569303   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:32.569358   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:32.622767   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:32.622802   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:35.181779   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:08:35.181813   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.181820   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.181826   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.181832   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.181837   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.181843   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.181851   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.181858   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.181867   71966 system_pods.go:74] duration metric: took 4.022033823s to wait for pod list to return data ...
	I0425 20:08:35.181879   71966 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:35.185387   71966 default_sa.go:45] found service account: "default"
	I0425 20:08:35.185413   71966 default_sa.go:55] duration metric: took 3.523751ms for default service account to be created ...
	I0425 20:08:35.185423   71966 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:35.195075   71966 system_pods.go:86] 8 kube-system pods found
	I0425 20:08:35.195099   71966 system_pods.go:89] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.195104   71966 system_pods.go:89] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.195109   71966 system_pods.go:89] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.195114   71966 system_pods.go:89] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.195118   71966 system_pods.go:89] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.195122   71966 system_pods.go:89] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.195128   71966 system_pods.go:89] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.195133   71966 system_pods.go:89] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.195139   71966 system_pods.go:126] duration metric: took 9.711803ms to wait for k8s-apps to be running ...
	I0425 20:08:35.195155   71966 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:35.195195   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:35.213494   71966 system_svc.go:56] duration metric: took 18.331225ms WaitForService to wait for kubelet
	I0425 20:08:35.213523   71966 kubeadm.go:576] duration metric: took 4m22.589912913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:35.213545   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:35.216461   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:35.216481   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:35.216493   71966 node_conditions.go:105] duration metric: took 2.94061ms to run NodePressure ...
	I0425 20:08:35.216502   71966 start.go:240] waiting for startup goroutines ...
	I0425 20:08:35.216509   71966 start.go:245] waiting for cluster config update ...
	I0425 20:08:35.216518   71966 start.go:254] writing updated cluster config ...
	I0425 20:08:35.216750   71966 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:35.265836   71966 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:35.269026   71966 out.go:177] * Done! kubectl is now configured to use "embed-certs-512173" cluster and "default" namespace by default
	I0425 20:08:34.404410   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:34.404662   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:44.405293   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:44.405518   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:04.406406   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:04.406676   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.407969   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:44.408240   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.408259   72712 kubeadm.go:309] 
	I0425 20:09:44.408293   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:09:44.408355   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:09:44.408373   72712 kubeadm.go:309] 
	I0425 20:09:44.408417   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:09:44.408448   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:09:44.408562   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:09:44.408575   72712 kubeadm.go:309] 
	I0425 20:09:44.408655   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:09:44.408684   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:09:44.408711   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:09:44.408718   72712 kubeadm.go:309] 
	I0425 20:09:44.408812   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:09:44.408912   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:09:44.408939   72712 kubeadm.go:309] 
	I0425 20:09:44.409085   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:09:44.409217   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:09:44.409341   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:09:44.409418   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:09:44.409433   72712 kubeadm.go:309] 
	I0425 20:09:44.410319   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:09:44.410423   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:09:44.410510   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0425 20:09:44.410640   72712 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0425 20:09:44.410700   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:09:45.395830   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:09:45.412628   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:09:45.423387   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:09:45.423412   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:09:45.423465   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:09:45.434317   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:09:45.434389   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:09:45.445657   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:09:45.455698   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:09:45.455772   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:09:45.466137   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.476140   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:09:45.476192   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.486410   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:09:45.495465   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:09:45.495522   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:09:45.505410   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:09:45.726416   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:11:42.214574   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:11:42.214715   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0425 20:11:42.216323   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:11:42.216393   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:11:42.216507   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:11:42.216650   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:11:42.216795   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:11:42.216882   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:11:42.218766   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:11:42.218847   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:11:42.218923   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:11:42.219042   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:11:42.219103   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:11:42.219167   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:11:42.219237   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:11:42.219321   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:11:42.219407   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:11:42.219519   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:11:42.219639   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:11:42.219694   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:11:42.219742   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:11:42.219786   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:11:42.219831   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:11:42.219883   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:11:42.219929   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:11:42.220029   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:11:42.220139   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:11:42.220204   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:11:42.220308   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:11:42.222891   72712 out.go:204]   - Booting up control plane ...
	I0425 20:11:42.222979   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:11:42.223054   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:11:42.223129   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:11:42.223222   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:11:42.223404   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:11:42.223459   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:11:42.223565   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.223835   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.223937   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224165   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224243   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224457   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224541   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224799   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224902   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.225125   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.225134   72712 kubeadm.go:309] 
	I0425 20:11:42.225166   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:11:42.225204   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:11:42.225210   72712 kubeadm.go:309] 
	I0425 20:11:42.225239   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:11:42.225267   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:11:42.225352   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:11:42.225358   72712 kubeadm.go:309] 
	I0425 20:11:42.225446   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:11:42.225476   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:11:42.225522   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:11:42.225533   72712 kubeadm.go:309] 
	I0425 20:11:42.225626   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:11:42.225714   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:11:42.225729   72712 kubeadm.go:309] 
	I0425 20:11:42.225875   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:11:42.225951   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:11:42.226022   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:11:42.226096   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:11:42.226129   72712 kubeadm.go:309] 
	I0425 20:11:42.226162   72712 kubeadm.go:393] duration metric: took 8m0.122692927s to StartCluster
	I0425 20:11:42.226242   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:11:42.226299   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:11:42.283295   72712 cri.go:89] found id: ""
	I0425 20:11:42.283320   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.283329   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:11:42.283335   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:11:42.283389   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:11:42.322462   72712 cri.go:89] found id: ""
	I0425 20:11:42.322493   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.322505   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:11:42.322512   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:11:42.322574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:11:42.372329   72712 cri.go:89] found id: ""
	I0425 20:11:42.372355   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.372363   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:11:42.372369   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:11:42.372416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:11:42.420348   72712 cri.go:89] found id: ""
	I0425 20:11:42.420374   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.420382   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:11:42.420389   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:11:42.420447   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:11:42.460274   72712 cri.go:89] found id: ""
	I0425 20:11:42.460317   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.460329   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:11:42.460337   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:11:42.460395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:11:42.503828   72712 cri.go:89] found id: ""
	I0425 20:11:42.503855   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.503867   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:11:42.503874   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:11:42.503933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:11:42.545045   72712 cri.go:89] found id: ""
	I0425 20:11:42.545070   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.545086   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:11:42.545095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:11:42.545156   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:11:42.586389   72712 cri.go:89] found id: ""
	I0425 20:11:42.586413   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.586421   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:11:42.586429   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:11:42.586440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:11:42.602835   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:11:42.602863   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:11:42.695131   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:11:42.695153   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:11:42.695168   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:11:42.819889   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:11:42.819922   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:11:42.869446   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:11:42.869474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0425 20:11:42.927184   72712 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0425 20:11:42.927236   72712 out.go:239] * 
	W0425 20:11:42.927291   72712 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.927311   72712 out.go:239] * 
	W0425 20:11:42.928275   72712 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 20:11:42.931353   72712 out.go:177] 
	W0425 20:11:42.932654   72712 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.932696   72712 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0425 20:11:42.932713   72712 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0425 20:11:42.934227   72712 out.go:177] 
	
	
	==> CRI-O <==
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.827420905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714075904827396144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45692680-edeb-4c25-9cff-8413cb329708 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.828393963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6e402c9-f39f-49fc-8232-327ab695c5b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.828471985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6e402c9-f39f-49fc-8232-327ab695c5b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.828508218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b6e402c9-f39f-49fc-8232-327ab695c5b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.867538063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=febc7d85-ba3a-4cfd-9509-8d2a0560bb89 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.867695747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=febc7d85-ba3a-4cfd-9509-8d2a0560bb89 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.869133443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eaf55c27-6d7d-4d42-a916-7ae4d80fbbe3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.869565826Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714075904869539476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eaf55c27-6d7d-4d42-a916-7ae4d80fbbe3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.870182377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f697c56b-eb87-4a6a-8dd2-2a6358b082a3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.870262491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f697c56b-eb87-4a6a-8dd2-2a6358b082a3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.870335787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f697c56b-eb87-4a6a-8dd2-2a6358b082a3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.910977867Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0af3039c-6196-4d11-a9dd-8cc5315edc98 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.911112323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0af3039c-6196-4d11-a9dd-8cc5315edc98 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.913960607Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac7ce484-a2a5-4808-b1b3-f24c74db2e46 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.914379507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714075904914356877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac7ce484-a2a5-4808-b1b3-f24c74db2e46 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.920870239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f94b9e56-78bc-4508-9e7c-aec17d1f68d4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.921004526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f94b9e56-78bc-4508-9e7c-aec17d1f68d4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.921047611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f94b9e56-78bc-4508-9e7c-aec17d1f68d4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.958390377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb34a492-76d7-43a1-9b59-f7959bbfb927 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.958489025Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb34a492-76d7-43a1-9b59-f7959bbfb927 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.959320996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9eec82da-5a75-46cb-b139-2abcb7c49b84 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.959868600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714075904959842969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9eec82da-5a75-46cb-b139-2abcb7c49b84 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.960413803Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba499fb1-32e1-4502-83e7-92901293ce27 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.960491766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba499fb1-32e1-4502-83e7-92901293ce27 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:11:44 old-k8s-version-210442 crio[650]: time="2024-04-25 20:11:44.960553511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ba499fb1-32e1-4502-83e7-92901293ce27 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr25 20:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063840] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050603] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.017598] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.598719] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.716084] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.653602] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.065627] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084851] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.203835] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.167647] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.363402] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.835292] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.069736] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.981211] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[ +11.947575] kauditd_printk_skb: 46 callbacks suppressed
	[Apr25 20:07] systemd-fstab-generator[4988]: Ignoring "noauto" option for root device
	[Apr25 20:09] systemd-fstab-generator[5273]: Ignoring "noauto" option for root device
	[  +0.069773] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:11:45 up 8 min,  0 users,  load average: 0.02, 0.17, 0.12
	Linux old-k8s-version-210442 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000e02a0, 0xc000cd7c20, 0xc000cd7c20, 0x0, 0x0)
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Apr 25 20:11:43 old-k8s-version-210442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0005ae540)
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]: goroutine 166 [runnable]:
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000bcadc0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000bf9aa0, 0x0, 0x0)
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0005ae540)
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Apr 25 20:11:43 old-k8s-version-210442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 25 20:11:43 old-k8s-version-210442 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 25 20:11:43 old-k8s-version-210442 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5528]: I0425 20:11:43.860478    5528 server.go:416] Version: v1.20.0
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5528]: I0425 20:11:43.860973    5528 server.go:837] Client rotation is on, will bootstrap in background
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5528]: I0425 20:11:43.864084    5528 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5528]: I0425 20:11:43.865637    5528 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 25 20:11:43 old-k8s-version-210442 kubelet[5528]: W0425 20:11:43.866213    5528 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210442 -n old-k8s-version-210442
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 2 (252.392712ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-210442" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (749.29s)
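Note on the failure above: the kubeadm output shows the kubelet on old-k8s-version-210442 never answering its health check on 127.0.0.1:10248, and minikube's own suggestion is to read the kubelet journal and retry with an explicit cgroup driver. A minimal triage sketch using only commands already quoted in this log (the profile name old-k8s-version-210442 and the flag value are taken from the output above, not verified independently):

	# inspect the kubelet and any control-plane containers on the node
	minikube -p old-k8s-version-210442 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-210442 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	minikube -p old-k8s-version-210442 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# retry the start with the cgroup driver hint from the error message
	minikube start -p old-k8s-version-210442 --extra-config=kubelet.cgroup-driver=systemd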

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0425 20:07:57.270447   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 20:08:21.359125   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 20:08:27.582767   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-25 20:16:54.32994736 +0000 UTC m=+6340.738881915
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
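(For reference, the test above waits up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. A hand-run equivalent of that check, shown only as an illustrative sketch; the context name default-k8s-diff-port-142196 is the profile from this log:

	# list the dashboard pods the test is waiting for and show why they are not ready
	kubectl --context default-k8s-diff-port-142196 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context default-k8s-diff-port-142196 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard

	# the same readiness wait the test performs, with a shorter timeout
	kubectl --context default-k8s-diff-port-142196 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=90s
)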
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-142196 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-142196 logs -n 25: (2.264568329s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-120641 sudo cat                             | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo find                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo crio                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-120641                                      | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113000 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:54 UTC |
	|         | disable-driver-mounts-113000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-512173            | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-744552             | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142196  | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210442        | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-512173                 | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-744552                  | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142196       | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:07 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210442             | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:59:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:59:17.353932   72712 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:59:17.354045   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354055   72712 out.go:304] Setting ErrFile to fd 2...
	I0425 19:59:17.354059   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354269   72712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:59:17.354795   72712 out.go:298] Setting JSON to false
	I0425 19:59:17.355681   72712 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6103,"bootTime":1714069054,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:59:17.355740   72712 start.go:139] virtualization: kvm guest
	I0425 19:59:17.357921   72712 out.go:177] * [old-k8s-version-210442] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:59:17.359325   72712 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:59:17.360640   72712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:59:17.359305   72712 notify.go:220] Checking for updates...
	I0425 19:59:17.361801   72712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:59:17.363086   72712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:59:17.364512   72712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:59:17.365842   72712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:59:17.367508   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 19:59:17.367909   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.367946   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.382995   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0425 19:59:17.383362   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.383991   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.384016   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.384378   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.384566   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.386317   72712 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0425 19:59:17.387599   72712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:59:17.387904   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.387948   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.402999   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0425 19:59:17.403506   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.403962   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.403986   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.404318   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.404472   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.438308   72712 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:59:17.439686   72712 start.go:297] selected driver: kvm2
	I0425 19:59:17.439716   72712 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.439831   72712 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:59:17.440486   72712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.440553   72712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:59:17.454719   72712 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:59:17.455114   72712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:59:17.455184   72712 cni.go:84] Creating CNI manager for ""
	I0425 19:59:17.455203   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:59:17.455266   72712 start.go:340] cluster config:
	{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.455393   72712 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.457210   72712 out.go:177] * Starting "old-k8s-version-210442" primary control-plane node in "old-k8s-version-210442" cluster
	I0425 19:59:18.474583   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:17.458384   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 19:59:17.458418   72712 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:59:17.458430   72712 cache.go:56] Caching tarball of preloaded images
	I0425 19:59:17.458517   72712 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:59:17.458529   72712 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0425 19:59:17.458638   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 19:59:17.458844   72712 start.go:360] acquireMachinesLock for old-k8s-version-210442: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:59:24.554517   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:27.626446   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:33.706451   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:36.778527   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:42.858471   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:45.930403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:52.010482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:55.082403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:01.162466   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:04.234537   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:10.314506   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:13.386463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:19.466523   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:22.538461   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:28.622423   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:31.690489   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:37.770534   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:40.842458   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:46.922463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:49.994524   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:56.074478   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:59.146487   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:05.226452   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:08.298480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:14.378455   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:17.450469   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:23.530513   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:26.602470   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:32.682497   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:35.754500   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:41.834480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:44.906482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:50.986468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:54.058502   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:00.138459   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:03.210554   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:09.290491   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:12.362472   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:18.442476   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:21.514468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.599158   72220 start.go:364] duration metric: took 4m21.632012686s to acquireMachinesLock for "no-preload-744552"
	I0425 20:02:30.599206   72220 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:30.599212   72220 fix.go:54] fixHost starting: 
	I0425 20:02:30.599516   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:30.599545   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:30.614130   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0425 20:02:30.614502   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:30.614962   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:02:30.614979   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:30.615306   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:30.615513   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:30.615640   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:02:30.617129   72220 fix.go:112] recreateIfNeeded on no-preload-744552: state=Stopped err=<nil>
	I0425 20:02:30.617150   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	W0425 20:02:30.617300   72220 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:30.619253   72220 out.go:177] * Restarting existing kvm2 VM for "no-preload-744552" ...
	I0425 20:02:27.594454   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.596600   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:02:30.596654   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.596986   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:02:30.597016   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.597206   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:02:30.599042   71966 machine.go:97] duration metric: took 4m44.620242563s to provisionDockerMachine
	I0425 20:02:30.599079   71966 fix.go:56] duration metric: took 4m44.639860566s for fixHost
	I0425 20:02:30.599085   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 4m44.639890108s
	W0425 20:02:30.599104   71966 start.go:713] error starting host: provision: host is not running
	W0425 20:02:30.599182   71966 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0425 20:02:30.599192   71966 start.go:728] Will try again in 5 seconds ...
	I0425 20:02:30.620801   72220 main.go:141] libmachine: (no-preload-744552) Calling .Start
	I0425 20:02:30.620978   72220 main.go:141] libmachine: (no-preload-744552) Ensuring networks are active...
	I0425 20:02:30.621640   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network default is active
	I0425 20:02:30.621965   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network mk-no-preload-744552 is active
	I0425 20:02:30.622317   72220 main.go:141] libmachine: (no-preload-744552) Getting domain xml...
	I0425 20:02:30.623010   72220 main.go:141] libmachine: (no-preload-744552) Creating domain...
	I0425 20:02:31.809967   72220 main.go:141] libmachine: (no-preload-744552) Waiting to get IP...
	I0425 20:02:31.810856   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:31.811353   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:31.811403   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:31.811308   73381 retry.go:31] will retry after 294.641704ms: waiting for machine to come up
	I0425 20:02:32.107955   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.108508   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.108542   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.108449   73381 retry.go:31] will retry after 373.307428ms: waiting for machine to come up
	I0425 20:02:32.483111   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.483590   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.483619   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.483546   73381 retry.go:31] will retry after 484.455862ms: waiting for machine to come up
	I0425 20:02:32.969188   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.969657   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.969694   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.969602   73381 retry.go:31] will retry after 382.359725ms: waiting for machine to come up
	I0425 20:02:33.353143   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.353598   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.353621   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.353550   73381 retry.go:31] will retry after 515.389674ms: waiting for machine to come up
	I0425 20:02:35.602273   71966 start.go:360] acquireMachinesLock for embed-certs-512173: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 20:02:33.870172   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.870652   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.870676   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.870603   73381 retry.go:31] will retry after 714.032032ms: waiting for machine to come up
	I0425 20:02:34.586478   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:34.586833   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:34.586861   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:34.586791   73381 retry.go:31] will retry after 1.005122465s: waiting for machine to come up
	I0425 20:02:35.593962   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:35.594367   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:35.594400   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:35.594310   73381 retry.go:31] will retry after 1.483740326s: waiting for machine to come up
	I0425 20:02:37.079306   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:37.079751   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:37.079784   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:37.079700   73381 retry.go:31] will retry after 1.828802911s: waiting for machine to come up
	I0425 20:02:38.910631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:38.911138   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:38.911163   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:38.911086   73381 retry.go:31] will retry after 1.528405609s: waiting for machine to come up
	I0425 20:02:40.441741   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:40.442251   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:40.442277   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:40.442200   73381 retry.go:31] will retry after 2.817901976s: waiting for machine to come up
	I0425 20:02:43.263903   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:43.264376   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:43.264408   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:43.264324   73381 retry.go:31] will retry after 2.258888981s: waiting for machine to come up
	I0425 20:02:45.525701   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:45.526139   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:45.526168   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:45.526106   73381 retry.go:31] will retry after 4.008258204s: waiting for machine to come up
	I0425 20:02:50.951421   72304 start.go:364] duration metric: took 4m34.5614094s to acquireMachinesLock for "default-k8s-diff-port-142196"
	I0425 20:02:50.951491   72304 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:50.951500   72304 fix.go:54] fixHost starting: 
	I0425 20:02:50.951906   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:50.951944   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:50.968074   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33481
	I0425 20:02:50.968452   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:50.968862   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:02:50.968886   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:50.969238   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:50.969460   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:02:50.969622   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:02:50.971100   72304 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142196: state=Stopped err=<nil>
	I0425 20:02:50.971125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	W0425 20:02:50.971271   72304 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:50.974623   72304 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142196" ...
	I0425 20:02:50.975991   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Start
	I0425 20:02:50.976154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring networks are active...
	I0425 20:02:50.976794   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network default is active
	I0425 20:02:50.977111   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network mk-default-k8s-diff-port-142196 is active
	I0425 20:02:50.977490   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Getting domain xml...
	I0425 20:02:50.978200   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Creating domain...
	I0425 20:02:49.538522   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.538999   72220 main.go:141] libmachine: (no-preload-744552) Found IP for machine: 192.168.72.142
	I0425 20:02:49.539033   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has current primary IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.539043   72220 main.go:141] libmachine: (no-preload-744552) Reserving static IP address...
	I0425 20:02:49.539420   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.539458   72220 main.go:141] libmachine: (no-preload-744552) DBG | skip adding static IP to network mk-no-preload-744552 - found existing host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"}
	I0425 20:02:49.539469   72220 main.go:141] libmachine: (no-preload-744552) Reserved static IP address: 192.168.72.142
	I0425 20:02:49.539483   72220 main.go:141] libmachine: (no-preload-744552) Waiting for SSH to be available...
	I0425 20:02:49.539490   72220 main.go:141] libmachine: (no-preload-744552) DBG | Getting to WaitForSSH function...
	I0425 20:02:49.541631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542042   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.542073   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542221   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH client type: external
	I0425 20:02:49.542270   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa (-rw-------)
	I0425 20:02:49.542300   72220 main.go:141] libmachine: (no-preload-744552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:02:49.542316   72220 main.go:141] libmachine: (no-preload-744552) DBG | About to run SSH command:
	I0425 20:02:49.542334   72220 main.go:141] libmachine: (no-preload-744552) DBG | exit 0
	I0425 20:02:49.670034   72220 main.go:141] libmachine: (no-preload-744552) DBG | SSH cmd err, output: <nil>: 
	I0425 20:02:49.670414   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetConfigRaw
	I0425 20:02:49.671039   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:49.673279   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673592   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.673629   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673878   72220 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/config.json ...
	I0425 20:02:49.674066   72220 machine.go:94] provisionDockerMachine start ...
	I0425 20:02:49.674083   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:49.674317   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.676767   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677084   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.677115   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677238   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.677413   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677562   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677698   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.677841   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.678037   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.678049   72220 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:02:49.790734   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:02:49.790764   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791028   72220 buildroot.go:166] provisioning hostname "no-preload-744552"
	I0425 20:02:49.791061   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791248   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.793907   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794279   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.794313   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794450   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.794649   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794787   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794908   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.795054   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.795256   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.795277   72220 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-744552 && echo "no-preload-744552" | sudo tee /etc/hostname
	I0425 20:02:49.925459   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744552
	
	I0425 20:02:49.925483   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.928282   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928646   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.928680   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928831   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.929012   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929194   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929327   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.929481   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.929679   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.929709   72220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-744552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-744552/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-744552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:02:50.052805   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:02:50.052841   72220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:02:50.052861   72220 buildroot.go:174] setting up certificates
	I0425 20:02:50.052875   72220 provision.go:84] configureAuth start
	I0425 20:02:50.052887   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:50.053193   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.055800   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056145   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.056168   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056339   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.058090   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058395   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.058429   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058526   72220 provision.go:143] copyHostCerts
	I0425 20:02:50.058577   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:02:50.058587   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:02:50.058647   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:02:50.058742   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:02:50.058750   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:02:50.058774   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:02:50.058827   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:02:50.058834   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:02:50.058855   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:02:50.058904   72220 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.no-preload-744552 san=[127.0.0.1 192.168.72.142 localhost minikube no-preload-744552]
	I0425 20:02:50.247711   72220 provision.go:177] copyRemoteCerts
	I0425 20:02:50.247768   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:02:50.247792   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.250146   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250560   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.250600   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250780   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.250978   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.251128   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.251272   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.338105   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:02:50.365554   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 20:02:50.391433   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:02:50.416606   72220 provision.go:87] duration metric: took 363.720332ms to configureAuth
	I0425 20:02:50.416627   72220 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:02:50.416795   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:02:50.416876   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.419385   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419731   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.419764   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419903   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.420079   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420322   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420557   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.420724   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.420909   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.420929   72220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:02:50.702065   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:02:50.702104   72220 machine.go:97] duration metric: took 1.028026584s to provisionDockerMachine
	I0425 20:02:50.702117   72220 start.go:293] postStartSetup for "no-preload-744552" (driver="kvm2")
	I0425 20:02:50.702131   72220 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:02:50.702165   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.702531   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:02:50.702572   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.705595   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.705948   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.705992   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.706173   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.706367   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.706588   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.706759   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.794791   72220 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:02:50.799592   72220 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:02:50.799621   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:02:50.799701   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:02:50.799799   72220 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:02:50.799913   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:02:50.810796   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:02:50.836919   72220 start.go:296] duration metric: took 134.787005ms for postStartSetup
	I0425 20:02:50.836972   72220 fix.go:56] duration metric: took 20.237758066s for fixHost
	I0425 20:02:50.836995   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.839818   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840295   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.840325   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840429   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.840600   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840752   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840929   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.841079   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.841307   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.841338   72220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:02:50.951251   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075370.921171901
	
	I0425 20:02:50.951272   72220 fix.go:216] guest clock: 1714075370.921171901
	I0425 20:02:50.951279   72220 fix.go:229] Guest: 2024-04-25 20:02:50.921171901 +0000 UTC Remote: 2024-04-25 20:02:50.836976462 +0000 UTC m=+282.018789867 (delta=84.195439ms)
	I0425 20:02:50.951312   72220 fix.go:200] guest clock delta is within tolerance: 84.195439ms
	I0425 20:02:50.951321   72220 start.go:83] releasing machines lock for "no-preload-744552", held for 20.352126868s
	I0425 20:02:50.951348   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.951612   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.954231   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954614   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.954638   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954821   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955240   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955419   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955492   72220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:02:50.955540   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.955659   72220 ssh_runner.go:195] Run: cat /version.json
	I0425 20:02:50.955688   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.958155   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958476   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958517   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958541   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958661   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.958808   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.958903   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958932   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.958935   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.959045   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.959181   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.959192   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.959360   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.959471   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:51.066809   72220 ssh_runner.go:195] Run: systemctl --version
	I0425 20:02:51.073198   72220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:02:51.228547   72220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:02:51.236443   72220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:02:51.236518   72220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:02:51.256226   72220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:02:51.256244   72220 start.go:494] detecting cgroup driver to use...
	I0425 20:02:51.256307   72220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:02:51.278596   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:02:51.295692   72220 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:02:51.295751   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:02:51.310940   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:02:51.326072   72220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:02:51.459064   72220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:02:51.614563   72220 docker.go:233] disabling docker service ...
	I0425 20:02:51.614639   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:02:51.638817   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:02:51.658265   72220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:02:51.818412   72220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:02:51.943830   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:02:51.960672   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:02:51.982028   72220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:02:51.982090   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:51.994990   72220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:02:51.995079   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.007907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.020225   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.033306   72220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:02:52.046241   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.058282   72220 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.078907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.090258   72220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:02:52.100796   72220 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:02:52.100873   72220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:02:52.115600   72220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:02:52.125458   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:02:52.288142   72220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:02:52.430252   72220 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:02:52.430353   72220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:02:52.436493   72220 start.go:562] Will wait 60s for crictl version
	I0425 20:02:52.436565   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.441427   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:02:52.479709   72220 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:02:52.479810   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.512180   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.545115   72220 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:02:52.546476   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:52.549314   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549723   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:52.549759   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549926   72220 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0425 20:02:52.554924   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:02:52.568804   72220 kubeadm.go:877] updating cluster {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:02:52.568958   72220 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:02:52.568997   72220 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:02:52.609095   72220 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:02:52.609117   72220 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:02:52.609156   72220 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.609188   72220 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.609185   72220 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.609214   72220 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.609227   72220 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.609256   72220 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.609334   72220 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.609370   72220 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610726   72220 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.610747   72220 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610772   72220 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.610724   72220 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.610800   72220 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.610807   72220 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.611075   72220 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.611096   72220 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.753069   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0425 20:02:52.771762   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.825052   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908030   72220 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0425 20:02:52.908082   72220 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.908113   72220 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0425 20:02:52.908127   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.908135   72220 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908164   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.915126   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.915132   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.967834   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.969385   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.973718   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0425 20:02:52.973787   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0425 20:02:52.973823   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:52.973870   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:52.985763   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.986695   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.068153   72220 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0425 20:02:53.068196   72220 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.068269   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099237   72220 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0425 20:02:53.099257   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0425 20:02:53.099274   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099290   72220 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:53.099294   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0425 20:02:53.099330   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099368   72220 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0425 20:02:53.099401   72220 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:53.099433   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099333   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.115478   72220 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0425 20:02:53.115523   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.115526   72220 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.115610   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.550328   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.240552   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting to get IP...
	I0425 20:02:52.241327   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241657   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241757   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.241648   73527 retry.go:31] will retry after 195.006273ms: waiting for machine to come up
	I0425 20:02:52.438154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438702   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438726   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.438657   73527 retry.go:31] will retry after 365.911905ms: waiting for machine to come up
	I0425 20:02:52.806281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806793   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806826   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.806727   73527 retry.go:31] will retry after 448.572137ms: waiting for machine to come up
	I0425 20:02:53.257396   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257935   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257966   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.257889   73527 retry.go:31] will retry after 560.886917ms: waiting for machine to come up
	I0425 20:02:53.820527   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820954   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820979   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.820915   73527 retry.go:31] will retry after 514.294303ms: waiting for machine to come up
	I0425 20:02:54.336706   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337129   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:54.337101   73527 retry.go:31] will retry after 853.040726ms: waiting for machine to come up
	I0425 20:02:55.192349   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192857   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:55.192774   73527 retry.go:31] will retry after 1.17554782s: waiting for machine to come up
	I0425 20:02:56.232794   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.133436829s)
	I0425 20:02:56.232845   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0425 20:02:56.232854   72220 ssh_runner.go:235] Completed: which crictl: (3.133373607s)
	I0425 20:02:56.232875   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232915   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232961   72220 ssh_runner.go:235] Completed: which crictl: (3.133515676s)
	I0425 20:02:56.232919   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:56.233011   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:56.233050   72220 ssh_runner.go:235] Completed: which crictl: (3.11742497s)
	I0425 20:02:56.233089   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:56.233126   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (3.117580594s)
	I0425 20:02:56.233160   72220 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.6828061s)
	I0425 20:02:56.233167   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0425 20:02:56.233207   72220 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0425 20:02:56.233242   72220 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:56.233248   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:56.233284   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:56.323764   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0425 20:02:56.323884   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:02:56.323906   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0425 20:02:56.323989   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:02:58.553707   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.320762887s)
	I0425 20:02:58.553742   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0425 20:02:58.553768   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.320739179s)
	I0425 20:02:58.553784   72220 ssh_runner.go:235] Completed: which crictl: (2.320487912s)
	I0425 20:02:58.553807   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0425 20:02:58.553838   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:58.553864   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.320587538s)
	I0425 20:02:58.553889   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:02:58.553909   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0425 20:02:58.553948   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.229944417s)
	I0425 20:02:58.553959   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553989   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0425 20:02:58.554009   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553910   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.23000183s)
	I0425 20:02:58.554069   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0425 20:02:58.602692   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0425 20:02:58.602694   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0425 20:02:58.602819   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:02:56.369693   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370169   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:56.370115   73527 retry.go:31] will retry after 1.260629487s: waiting for machine to come up
	I0425 20:02:57.632705   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633187   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:57.633150   73527 retry.go:31] will retry after 1.291948113s: waiting for machine to come up
	I0425 20:02:58.926675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927167   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927196   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:58.927111   73527 retry.go:31] will retry after 1.869565597s: waiting for machine to come up
	I0425 20:03:00.799357   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799820   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799850   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:00.799750   73527 retry.go:31] will retry after 2.157801293s: waiting for machine to come up
	I0425 20:03:00.027830   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.473790165s)
	I0425 20:03:00.027869   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0425 20:03:00.027895   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027943   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027842   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.424998268s)
	I0425 20:03:00.027985   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0425 20:03:02.204218   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.176247608s)
	I0425 20:03:02.204254   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0425 20:03:02.204290   72220 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.204335   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.959407   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959789   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959812   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:02.959745   73527 retry.go:31] will retry after 2.617480271s: waiting for machine to come up
	I0425 20:03:05.579300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:05.579775   73527 retry.go:31] will retry after 4.058370199s: waiting for machine to come up
	I0425 20:03:06.132743   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.928385447s)
	I0425 20:03:06.132779   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0425 20:03:06.132805   72220 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:06.132857   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:08.314803   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.181910584s)
	I0425 20:03:08.314842   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0425 20:03:08.314881   72220 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:03:08.314930   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:03:11.255486   72712 start.go:364] duration metric: took 3m53.796595105s to acquireMachinesLock for "old-k8s-version-210442"
	I0425 20:03:11.255550   72712 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:11.255569   72712 fix.go:54] fixHost starting: 
	I0425 20:03:11.256083   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:11.256128   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:11.272950   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0425 20:03:11.273365   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:11.273878   72712 main.go:141] libmachine: Using API Version  1
	I0425 20:03:11.273907   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:11.274277   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:11.274487   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:11.274666   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetState
	I0425 20:03:11.276420   72712 fix.go:112] recreateIfNeeded on old-k8s-version-210442: state=Stopped err=<nil>
	I0425 20:03:11.276454   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	W0425 20:03:11.276608   72712 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:11.279156   72712 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210442" ...
	I0425 20:03:09.639300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639833   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Found IP for machine: 192.168.39.123
	I0425 20:03:09.639867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has current primary IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserving static IP address...
	I0425 20:03:09.640257   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.640281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | skip adding static IP to network mk-default-k8s-diff-port-142196 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"}
	I0425 20:03:09.640300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserved static IP address: 192.168.39.123
	I0425 20:03:09.640313   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for SSH to be available...
	I0425 20:03:09.640321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Getting to WaitForSSH function...
	I0425 20:03:09.643058   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643371   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.643400   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643506   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH client type: external
	I0425 20:03:09.643557   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa (-rw-------)
	I0425 20:03:09.643586   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:09.643609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | About to run SSH command:
	I0425 20:03:09.643618   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | exit 0
	I0425 20:03:09.766707   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:09.767091   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetConfigRaw
	I0425 20:03:09.767818   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:09.770573   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771012   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.771047   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771296   72304 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/config.json ...
	I0425 20:03:09.771580   72304 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:09.771609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:09.771884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.774255   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.774699   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774866   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.775044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775213   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775362   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.775520   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.775781   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.775797   72304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:09.884259   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:09.884288   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884519   72304 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142196"
	I0425 20:03:09.884547   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884747   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.887391   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.887798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.887829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.888003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.888215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888542   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.888703   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.888918   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.888934   72304 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142196 && echo "default-k8s-diff-port-142196" | sudo tee /etc/hostname
	I0425 20:03:10.015919   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142196
	
	I0425 20:03:10.015951   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.018640   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.018955   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.018987   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.019201   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.019398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019729   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.019906   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.020098   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.020120   72304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142196' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142196/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142196' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:10.145789   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:10.145822   72304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:10.145873   72304 buildroot.go:174] setting up certificates
	I0425 20:03:10.145886   72304 provision.go:84] configureAuth start
	I0425 20:03:10.145899   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:10.146185   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:10.148943   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149309   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.149342   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149492   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.152000   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152418   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.152445   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152621   72304 provision.go:143] copyHostCerts
	I0425 20:03:10.152681   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:10.152693   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:10.152758   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:10.152890   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:10.152905   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:10.152940   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:10.153033   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:10.153044   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:10.153072   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:10.153145   72304 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142196 san=[127.0.0.1 192.168.39.123 default-k8s-diff-port-142196 localhost minikube]
	I0425 20:03:10.572412   72304 provision.go:177] copyRemoteCerts
	I0425 20:03:10.572473   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:10.572496   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.575083   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.575421   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.575696   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.575799   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.575916   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:10.657850   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:10.685493   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0425 20:03:10.713230   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:10.740577   72304 provision.go:87] duration metric: took 594.674196ms to configureAuth
	I0425 20:03:10.740604   72304 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:10.740835   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:10.740916   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.743709   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744039   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.744071   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744236   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.744434   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744621   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744723   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.744901   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.745065   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.745083   72304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:11.017816   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:11.017844   72304 machine.go:97] duration metric: took 1.24624593s to provisionDockerMachine
	I0425 20:03:11.017858   72304 start.go:293] postStartSetup for "default-k8s-diff-port-142196" (driver="kvm2")
	I0425 20:03:11.017871   72304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:11.017892   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.018195   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:11.018231   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.020759   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021067   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.021092   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.021403   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.021600   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.021729   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.106290   72304 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:11.111532   72304 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:11.111560   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:11.111645   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:11.111744   72304 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:11.111856   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:11.122216   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:11.150472   72304 start.go:296] duration metric: took 132.600197ms for postStartSetup
	I0425 20:03:11.150520   72304 fix.go:56] duration metric: took 20.199020729s for fixHost
	I0425 20:03:11.150544   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.153466   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.153798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.153824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.154055   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.154289   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154483   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154635   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.154824   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:11.154991   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:11.155001   72304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:03:11.255330   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075391.221756501
	
	I0425 20:03:11.255357   72304 fix.go:216] guest clock: 1714075391.221756501
	I0425 20:03:11.255365   72304 fix.go:229] Guest: 2024-04-25 20:03:11.221756501 +0000 UTC Remote: 2024-04-25 20:03:11.15052524 +0000 UTC m=+294.908822896 (delta=71.231261ms)
	I0425 20:03:11.255384   72304 fix.go:200] guest clock delta is within tolerance: 71.231261ms
	I0425 20:03:11.255388   72304 start.go:83] releasing machines lock for "default-k8s-diff-port-142196", held for 20.303917474s
	I0425 20:03:11.255419   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.255700   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:11.258740   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259076   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.259104   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259414   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.259906   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260102   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260197   72304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:11.260241   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.260350   72304 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:11.260374   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.262843   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263001   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263216   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263245   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263365   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263480   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263669   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263679   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263864   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264026   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264039   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.264203   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.280701   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .Start
	I0425 20:03:11.280895   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring networks are active...
	I0425 20:03:11.281729   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network default is active
	I0425 20:03:11.282158   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network mk-old-k8s-version-210442 is active
	I0425 20:03:11.282639   72712 main.go:141] libmachine: (old-k8s-version-210442) Getting domain xml...
	I0425 20:03:11.283399   72712 main.go:141] libmachine: (old-k8s-version-210442) Creating domain...
	I0425 20:03:11.339564   72304 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:11.364667   72304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:11.526308   72304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:11.533487   72304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:11.533563   72304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:11.552090   72304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:11.552120   72304 start.go:494] detecting cgroup driver to use...
	I0425 20:03:11.552196   72304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:11.569573   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:11.584425   72304 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:11.584489   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:11.599083   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:11.613739   72304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:11.739574   72304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:11.911318   72304 docker.go:233] disabling docker service ...
	I0425 20:03:11.911390   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:11.928743   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:11.946101   72304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:12.112740   72304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:12.246863   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:12.269551   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:12.298838   72304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:12.298907   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.312059   72304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:12.312113   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.324076   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.336239   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.350088   72304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:12.368362   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.385406   72304 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.407195   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.420065   72304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:12.431195   72304 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:12.431260   72304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:12.446263   72304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:12.457137   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:12.622756   72304 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:12.799932   72304 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:12.800012   72304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:12.807795   72304 start.go:562] Will wait 60s for crictl version
	I0425 20:03:12.807862   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:03:12.813860   72304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:12.861249   72304 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:12.861327   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.896140   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.942768   72304 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:03:09.079550   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0425 20:03:09.079607   72220 cache_images.go:123] Successfully loaded all cached images
	I0425 20:03:09.079615   72220 cache_images.go:92] duration metric: took 16.470485982s to LoadCachedImages
	I0425 20:03:09.079629   72220 kubeadm.go:928] updating node { 192.168.72.142 8443 v1.30.0 crio true true} ...
	I0425 20:03:09.079764   72220 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-744552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:09.079839   72220 ssh_runner.go:195] Run: crio config
	I0425 20:03:09.139170   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:09.139194   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:09.139206   72220 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:09.139225   72220 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.142 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-744552 NodeName:no-preload-744552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:09.139365   72220 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-744552"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:09.139426   72220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:09.151828   72220 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:09.151884   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:09.163310   72220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0425 20:03:09.183132   72220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:09.203038   72220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0425 20:03:09.223717   72220 ssh_runner.go:195] Run: grep 192.168.72.142	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:09.228467   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:09.243976   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:09.361475   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:09.380862   72220 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552 for IP: 192.168.72.142
	I0425 20:03:09.380886   72220 certs.go:194] generating shared ca certs ...
	I0425 20:03:09.380901   72220 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:09.381076   72220 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:09.381132   72220 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:09.381147   72220 certs.go:256] generating profile certs ...
	I0425 20:03:09.381254   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/client.key
	I0425 20:03:09.381337   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key.a705cb96
	I0425 20:03:09.381392   72220 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key
	I0425 20:03:09.381538   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:09.381586   72220 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:09.381601   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:09.381638   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:09.381668   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:09.381702   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:09.381761   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:09.382459   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:09.423895   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:09.462481   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:09.491394   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:09.532779   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 20:03:09.569107   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 20:03:09.597381   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:09.623962   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:09.651141   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:09.677295   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:09.702404   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:09.729275   72220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:09.748421   72220 ssh_runner.go:195] Run: openssl version
	I0425 20:03:09.754848   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:09.768121   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774468   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774529   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.783568   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:09.799120   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:09.812983   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818660   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818740   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.826091   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:09.840115   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:09.853372   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858387   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858455   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.864693   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:09.876755   72220 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:09.882829   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:09.890219   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:09.897091   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:09.906017   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:09.913154   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:09.919989   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 20:03:09.926552   72220 kubeadm.go:391] StartCluster: {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:09.926671   72220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:09.926734   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:09.971983   72220 cri.go:89] found id: ""
	I0425 20:03:09.972071   72220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:09.983371   72220 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:09.983399   72220 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:09.983406   72220 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:09.983451   72220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:09.994047   72220 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:09.995080   72220 kubeconfig.go:125] found "no-preload-744552" server: "https://192.168.72.142:8443"
	I0425 20:03:09.997202   72220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:10.007666   72220 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.142
	I0425 20:03:10.007703   72220 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:10.007713   72220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:10.007752   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:10.049581   72220 cri.go:89] found id: ""
	I0425 20:03:10.049679   72220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:10.071032   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:10.083240   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:10.083267   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:10.083314   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:10.093444   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:10.093507   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:10.104291   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:10.114596   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:10.114659   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:10.125118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.138299   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:10.138362   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.152185   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:10.163493   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:10.163555   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:10.177214   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:10.188286   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:10.312536   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.497483   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.184911769s)
	I0425 20:03:11.497531   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.753732   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.871246   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.968366   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:11.968445   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.468885   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.968598   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:13.037502   72220 api_server.go:72] duration metric: took 1.069135698s to wait for apiserver process to appear ...
	I0425 20:03:13.037542   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:13.037568   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:13.038540   72220 api_server.go:269] stopped: https://192.168.72.142:8443/healthz: Get "https://192.168.72.142:8443/healthz": dial tcp 192.168.72.142:8443: connect: connection refused
	I0425 20:03:13.537713   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:12.944206   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:12.947412   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.947822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:12.947852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.948086   72304 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:12.953504   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:12.969171   72304 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:12.969344   72304 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:12.969402   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:13.016509   72304 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:13.016585   72304 ssh_runner.go:195] Run: which lz4
	I0425 20:03:13.022023   72304 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 20:03:13.027861   72304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:13.027896   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:14.913405   72304 crio.go:462] duration metric: took 1.891428846s to copy over tarball
	I0425 20:03:14.913466   72304 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
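Because the `stat` existence check for /preloaded.tar.lz4 failed, the preloaded image tarball is copied onto the VM and then unpacked into /var with tar and lz4. A rough sketch of the same check-then-extract step, shelling out the way the log does; the paths and the local copy standing in for the scp are illustrative, not minikube's actual runner API:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload copies the preload tarball into place if it is missing and
// then extracts it, roughly mirroring the stat -> scp -> tar sequence above.
func ensurePreload(srcTarball, destTarball, destDir string) error {
	// Existence check: in the log this is `stat` over SSH; locally os.Stat is enough.
	if _, err := os.Stat(destTarball); err != nil {
		if !os.IsNotExist(err) {
			return err
		}
		// Copy the tarball into place (stand-in for the scp step in the log).
		if out, err := exec.Command("cp", srcTarball, destTarball).CombinedOutput(); err != nil {
			return fmt.Errorf("copy failed: %v: %s", err, out)
		}
	}
	// Extract with the same flags seen in the log: lz4-compressed tar into /var.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", destTarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := ensurePreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}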
	I0425 20:03:12.659136   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting to get IP...
	I0425 20:03:12.660227   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.660770   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.660843   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.660724   73691 retry.go:31] will retry after 234.96602ms: waiting for machine to come up
	I0425 20:03:12.897395   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.897966   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.897993   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.897913   73691 retry.go:31] will retry after 387.692223ms: waiting for machine to come up
	I0425 20:03:13.287742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.288414   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.288443   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.288397   73691 retry.go:31] will retry after 461.897892ms: waiting for machine to come up
	I0425 20:03:13.752061   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.752574   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.752603   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.752513   73691 retry.go:31] will retry after 452.347315ms: waiting for machine to come up
	I0425 20:03:14.206275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.206684   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.206708   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.206629   73691 retry.go:31] will retry after 466.12355ms: waiting for machine to come up
	I0425 20:03:14.674265   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.674788   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.674818   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.674735   73691 retry.go:31] will retry after 697.70071ms: waiting for machine to come up
	I0425 20:03:15.373862   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:15.374297   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:15.374325   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:15.374252   73691 retry.go:31] will retry after 835.73273ms: waiting for machine to come up
	I0425 20:03:16.211394   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:16.211870   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:16.211902   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:16.211815   73691 retry.go:31] will retry after 1.26739043s: waiting for machine to come up
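The libmachine lines above show the kvm2 driver polling for the VM's DHCP lease and retrying with a growing, slightly randomised delay each time no IP is found yet. A generic sketch of that retry-with-backoff pattern; lookupIP is a placeholder, not the driver's real libvirt lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease for the machine yet")

// lookupIP is a placeholder for "ask libvirt for the domain's DHCP lease".
func lookupIP(mac string) (string, error) {
	return "", errNoLease // pretend the lease has not shown up yet
}

// waitForIP retries lookupIP with a randomised, growing delay, similar to the
// "will retry after ..." messages in the log, until the timeout is reached.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("machine with MAC %s did not get an IP within %s", mac, timeout)
}

func main() {
	if _, err := waitForIP("52:54:00:11:0b:ca", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}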
	I0425 20:03:16.441793   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.441829   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.441848   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.506023   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.506057   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.538293   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.544891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:16.544925   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.038519   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.049842   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.049883   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.538420   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.545891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.545929   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:18.038192   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:18.042957   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:03:18.063131   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:18.063171   72220 api_server.go:131] duration metric: took 5.025619242s to wait for apiserver health ...
	I0425 20:03:18.063182   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:18.063192   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:18.405047   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:03:18.552639   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:18.565507   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
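Configuring the bridge CNI above amounts to creating /etc/cni/net.d and dropping a small conflist into it. A sketch of that step follows; the JSON below is a typical bridge plus host-local IPAM config for the 10.244.0.0/16 pod CIDR used elsewhere in this log, not necessarily the exact 496-byte file minikube writes:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// A typical bridge CNI config with host-local IPAM (illustrative content).
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors `sudo mkdir -p /etc/cni/net.d`
		fmt.Println(err)
		return
	}
	// Written as 1-k8s.conflist so it sorts first among CNI configs.
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}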
	I0425 20:03:18.591534   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:17.662135   72304 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.748640149s)
	I0425 20:03:17.662171   72304 crio.go:469] duration metric: took 2.748741671s to extract the tarball
	I0425 20:03:17.662184   72304 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:17.706288   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:17.773537   72304 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:03:17.773565   72304 cache_images.go:84] Images are preloaded, skipping loading
	I0425 20:03:17.773575   72304 kubeadm.go:928] updating node { 192.168.39.123 8444 v1.30.0 crio true true} ...
	I0425 20:03:17.773709   72304 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142196 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:17.773799   72304 ssh_runner.go:195] Run: crio config
	I0425 20:03:17.836354   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:17.836379   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:17.836391   72304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:17.836411   72304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142196 NodeName:default-k8s-diff-port-142196 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:17.836545   72304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142196"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:17.836599   72304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:17.848441   72304 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:17.848506   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:17.860320   72304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0425 20:03:17.885528   72304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:17.905701   72304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0425 20:03:17.925064   72304 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:17.930085   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
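The bash one-liner above rewrites /etc/hosts idempotently: strip any existing control-plane.minikube.internal entry, append the current one, write to a temp file, and copy the result back. The same idea expressed in Go, using the IP and hostname from the log; this is an illustrative equivalent, not minikube's own code:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\t<host>" from the
// hosts file and appends "<ip>\t<host>", mirroring the grep -v / echo / cp
// one-liner in the log.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	// Write the new content to a temp file first, then replace the original.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.123", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}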
	I0425 20:03:17.944507   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:18.108208   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:18.134428   72304 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196 for IP: 192.168.39.123
	I0425 20:03:18.134456   72304 certs.go:194] generating shared ca certs ...
	I0425 20:03:18.134479   72304 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:18.134672   72304 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:18.134745   72304 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:18.134761   72304 certs.go:256] generating profile certs ...
	I0425 20:03:18.134870   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/client.key
	I0425 20:03:18.245553   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key.1fb61bcb
	I0425 20:03:18.245666   72304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key
	I0425 20:03:18.245833   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:18.245880   72304 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:18.245894   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:18.245934   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:18.245964   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:18.245997   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:18.246058   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:18.246994   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:18.293000   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:18.322296   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:18.358060   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:18.390999   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0425 20:03:18.420333   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:18.450050   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:18.477983   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:18.506030   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:18.538394   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:18.574361   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:18.610827   72304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:18.634141   72304 ssh_runner.go:195] Run: openssl version
	I0425 20:03:18.640647   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:18.653988   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659400   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659458   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.665868   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:18.679247   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:18.692272   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697356   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697410   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.703694   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:18.716412   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:18.733362   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739598   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739651   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.748175   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:18.764492   72304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:18.770594   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:18.777414   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:18.784614   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:18.793453   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:18.800721   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:18.807982   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
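Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The equivalent check written directly against crypto/x509, assuming a PEM-encoded certificate file at one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in the PEM file is still
// valid for at least the given duration (what `-checkend 86400` checks).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("valid for another 24h:", ok)
}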
	I0425 20:03:18.814836   72304 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:18.814942   72304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:18.814992   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.864771   72304 cri.go:89] found id: ""
	I0425 20:03:18.864834   72304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:18.878200   72304 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:18.878238   72304 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:18.878245   72304 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:18.878305   72304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:18.892071   72304 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:18.892973   72304 kubeconfig.go:125] found "default-k8s-diff-port-142196" server: "https://192.168.39.123:8444"
	I0425 20:03:18.894860   72304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:18.907959   72304 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.123
	I0425 20:03:18.907989   72304 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:18.907998   72304 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:18.908045   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.951245   72304 cri.go:89] found id: ""
	I0425 20:03:18.951311   72304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:18.980033   72304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:18.995453   72304 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:18.995473   72304 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:18.995524   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0425 20:03:19.007409   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:19.007470   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:19.019782   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0425 20:03:19.031410   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:19.031493   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:19.043439   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.055936   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:19.055999   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.067986   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0425 20:03:19.080785   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:19.080869   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:19.092802   72304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
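The repeated grep / rm pairs above implement a simple rule: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint (here https://control-plane.minikube.internal:8444), it is treated as stale and deleted so that the kubeadm kubeconfig phase can regenerate it. A compact sketch of that rule, reading the files directly instead of shelling out to grep:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
// contain the expected control-plane endpoint, like the grep/rm pairs above.
func removeStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: remove it so kubeadm
			// writes a fresh one in the kubeconfig phase.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f)
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}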
	I0425 20:03:19.105024   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.240077   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.259510   72304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.019382485s)
	I0425 20:03:20.259544   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.489833   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.599319   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.784451   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:20.784606   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.284759   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:17.480654   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:17.481045   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:17.481094   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:17.481007   73691 retry.go:31] will retry after 1.238487953s: waiting for machine to come up
	I0425 20:03:18.720512   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:18.720940   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:18.720965   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:18.720902   73691 retry.go:31] will retry after 2.277078909s: waiting for machine to come up
	I0425 20:03:20.999749   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:21.000275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:21.000305   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:21.000223   73691 retry.go:31] will retry after 2.81059851s: waiting for machine to come up
	I0425 20:03:18.940880   72220 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:18.983894   72220 system_pods.go:61] "coredns-7db6d8ff4d-67sp6" [0fc3ee18-e3fe-4f4a-a5bd-4d6e3497bfa3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:18.983953   72220 system_pods.go:61] "etcd-no-preload-744552" [f3768d08-4cc6-42aa-9d1c-b0fd5d6ffed5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:18.983975   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [9d927e1f-4ddb-4b54-b1f1-f5248cb51745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:18.983984   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [cc71ce6c-22ba-4189-99dc-dd2da6506d37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:18.983993   72220 system_pods.go:61] "kube-proxy-whkbk" [a22b51a9-4854-41f5-bb5a-a81920a09b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0425 20:03:18.984026   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [5f01cd76-d6b7-4033-9aa9-38cac91965d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:18.984037   72220 system_pods.go:61] "metrics-server-569cc877fc-6n2gd" [03283a78-d44f-4f60-9743-680c18aeace3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:18.984052   72220 system_pods.go:61] "storage-provisioner" [4211811e-85ce-4da2-bc16-16909c26ced7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0425 20:03:18.984064   72220 system_pods.go:74] duration metric: took 392.509163ms to wait for pod list to return data ...
	I0425 20:03:18.984077   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:18.989373   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:18.989405   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:18.989424   72220 node_conditions.go:105] duration metric: took 5.341625ms to run NodePressure ...
	I0425 20:03:18.989446   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.809313   72220 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818730   72220 kubeadm.go:733] kubelet initialised
	I0425 20:03:19.818753   72220 kubeadm.go:734] duration metric: took 9.41696ms waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818761   72220 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:19.825762   72220 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:21.834658   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"False"
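pod_ready.go above polls each system-critical pod until its Ready condition turns True; at this point the coredns pod is still reporting "Ready":"False". Viewed from outside the test binary, the same wait can be expressed with `kubectl wait`; a small Go wrapper around that command, where the context and pod name are placeholders taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitPodReady shells out to `kubectl wait` to block until the pod's Ready
// condition is True or the timeout expires.
func waitPodReady(kubeContext, namespace, pod string, timeout time.Duration) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"-n", namespace, "wait", "--for=condition=Ready",
		"pod/"+pod, "--timeout="+timeout.String())
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl wait failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Context and pod name as seen in the log; adjust for your cluster.
	if err := waitPodReady("no-preload-744552", "kube-system", "coredns-7db6d8ff4d-67sp6", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}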
	I0425 20:03:21.785434   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.855046   72304 api_server.go:72] duration metric: took 1.070594042s to wait for apiserver process to appear ...
	I0425 20:03:21.855127   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:21.855156   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:21.855709   72304 api_server.go:269] stopped: https://192.168.39.123:8444/healthz: Get "https://192.168.39.123:8444/healthz": dial tcp 192.168.39.123:8444: connect: connection refused
	I0425 20:03:22.355555   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.430068   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.430099   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.430115   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.487089   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.487124   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.855301   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.861270   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:24.861299   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:25.356007   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.360802   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.360839   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:25.855336   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.861719   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.861753   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:23.812963   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:23.813457   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:23.813476   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:23.813429   73691 retry.go:31] will retry after 2.508562986s: waiting for machine to come up
	I0425 20:03:26.323267   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:26.323733   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:26.323761   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:26.323699   73691 retry.go:31] will retry after 4.475703543s: waiting for machine to come up
	I0425 20:03:26.355254   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.360977   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.361011   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:26.855547   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.860178   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.860203   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.355819   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.360466   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:27.360491   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.856219   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.861706   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:03:27.868486   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:27.868525   72304 api_server.go:131] duration metric: took 6.013385579s to wait for apiserver health ...
	I0425 20:03:27.868536   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:27.868544   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:27.870534   72304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
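
For context on the block above: the run polls the apiserver's /healthz endpoint roughly every 500ms and keeps dumping the per-hook status while it returns 500, until the endpoint finally answers 200 ("ok") at 20:03:27. Below is a minimal, self-contained sketch of that kind of readiness loop. It is illustrative only, not minikube's actual api_server.go code; the URL and ~500ms interval are taken from the log, while the TLS-skipping transport and the 2-minute budget are assumptions for the sketch.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires,
// printing the body of any non-200 response (mirroring the 500 dumps above).
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch: skip verification of the apiserver's
			// self-signed certificate, as an anonymous health probe would.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.123:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
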
	I0425 20:03:24.335382   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:24.335415   72220 pod_ready.go:81] duration metric: took 4.509621487s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:24.335427   72220 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:26.342530   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:28.841444   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:27.871863   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:27.885767   72304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:03:27.910270   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:27.922984   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:27.923016   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:27.923024   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:27.923030   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:27.923036   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:27.923041   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:03:27.923052   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:27.923057   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:27.923061   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:03:27.923067   72304 system_pods.go:74] duration metric: took 12.774358ms to wait for pod list to return data ...
	I0425 20:03:27.923073   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:27.927553   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:27.927582   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:27.927596   72304 node_conditions.go:105] duration metric: took 4.517775ms to run NodePressure ...
	I0425 20:03:27.927616   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:28.213013   72304 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217836   72304 kubeadm.go:733] kubelet initialised
	I0425 20:03:28.217860   72304 kubeadm.go:734] duration metric: took 4.809ms waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217869   72304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:28.225122   72304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.229920   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229940   72304 pod_ready.go:81] duration metric: took 4.794976ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.229948   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229954   72304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.234362   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234380   72304 pod_ready.go:81] duration metric: took 4.417955ms for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.234388   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234394   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.238885   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238904   72304 pod_ready.go:81] duration metric: took 4.504378ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.238917   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238924   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.314420   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314446   72304 pod_ready.go:81] duration metric: took 75.511589ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.314457   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314464   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.714128   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714165   72304 pod_ready.go:81] duration metric: took 399.694231ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.714178   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714187   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.113925   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113958   72304 pod_ready.go:81] duration metric: took 399.760651ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.113971   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113977   72304 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.514107   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514132   72304 pod_ready.go:81] duration metric: took 400.147308ms for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.514142   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514149   72304 pod_ready.go:38] duration metric: took 1.296270699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
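
The pod_ready.go entries above repeatedly check whether each system-critical pod carries the Ready condition, skipping early while the node itself is not Ready. A rough client-go sketch of that condition check is shown below; it is not the minikube helper itself, and the kubeconfig path and poll interval are hypothetical placeholders for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named kube-system pod has the Ready
// condition set to True, the same condition the log above is waiting on.
func isPodReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Hypothetical kubeconfig path for illustration; the test run writes its
	// own kubeconfig under the jenkins minikube-integration workspace.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := isPodReady(clientset, "etcd-default-k8s-diff-port-142196")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(400 * time.Millisecond)
	}
}
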
	I0425 20:03:29.514167   72304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:03:29.528766   72304 ops.go:34] apiserver oom_adj: -16
	I0425 20:03:29.528791   72304 kubeadm.go:591] duration metric: took 10.650540723s to restartPrimaryControlPlane
	I0425 20:03:29.528801   72304 kubeadm.go:393] duration metric: took 10.713975851s to StartCluster
	I0425 20:03:29.528816   72304 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.528887   72304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:29.530674   72304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.530951   72304 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:03:29.532792   72304 out.go:177] * Verifying Kubernetes components...
	I0425 20:03:29.531039   72304 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:03:29.531203   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:29.534328   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:29.534349   72304 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534377   72304 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534383   72304 addons.go:243] addon metrics-server should already be in state true
	I0425 20:03:29.534331   72304 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534416   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534441   72304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142196"
	I0425 20:03:29.534334   72304 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534536   72304 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534549   72304 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:03:29.534584   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534786   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534814   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534839   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534815   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534956   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.535000   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.551165   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0425 20:03:29.551680   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552007   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0425 20:03:29.552399   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.552419   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.552445   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552864   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553003   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.553028   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.553066   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.553409   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553621   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0425 20:03:29.554006   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.554024   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.554057   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.554555   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.554579   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.554908   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.555432   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.555487   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.557216   72304 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.557238   72304 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:03:29.557267   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.557642   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.557675   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.570559   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0425 20:03:29.571013   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.571538   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.571562   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.571944   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.572152   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.574003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.576061   72304 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:03:29.575108   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33777
	I0425 20:03:29.575580   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0425 20:03:29.577356   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:03:29.577374   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:03:29.577394   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.577861   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.577964   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.578333   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578356   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578514   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578543   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578735   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578909   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578947   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.579603   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.579633   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.580871   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.582436   72304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:29.581297   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.581851   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.583941   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.583971   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.583994   72304 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.584021   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:03:29.584031   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.584044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.584282   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.584430   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.586538   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.586880   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.586901   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.587119   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.587314   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.587470   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.587560   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.595882   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0425 20:03:29.596234   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.596711   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.596728   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.597146   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.597321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.598599   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.598799   72304 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:29.598811   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:03:29.598822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.600829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.601149   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.601409   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.601479   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.601537   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.772228   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:29.799159   72304 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:29.893622   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:03:29.893647   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:03:29.895090   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.919651   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:03:29.919673   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:03:29.929992   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:30.004488   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:30.004519   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:03:30.061525   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.113425632s)
	I0425 20:03:31.043511   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.148338843s)
	I0425 20:03:31.043539   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043587   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043524   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043629   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043894   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043910   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043946   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.043953   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043964   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043973   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043992   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044107   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044159   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044199   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044209   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044219   72304 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-142196"
	I0425 20:03:31.044216   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044237   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044253   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044262   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044542   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044566   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044662   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044682   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.052429   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.052451   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.052675   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.052694   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.055680   72304 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0425 20:03:31.057271   72304 addons.go:505] duration metric: took 1.526243989s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0425 20:03:32.187768   71966 start.go:364] duration metric: took 56.585448027s to acquireMachinesLock for "embed-certs-512173"
	I0425 20:03:32.187838   71966 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:32.187849   71966 fix.go:54] fixHost starting: 
	I0425 20:03:32.188220   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:32.188266   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:32.207172   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0425 20:03:32.207627   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:32.208170   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:03:32.208196   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:32.208493   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:32.208700   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:32.208837   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:03:32.210552   71966 fix.go:112] recreateIfNeeded on embed-certs-512173: state=Stopped err=<nil>
	I0425 20:03:32.210577   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	W0425 20:03:32.210741   71966 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:32.213400   71966 out.go:177] * Restarting existing kvm2 VM for "embed-certs-512173" ...
	I0425 20:03:30.803467   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804014   72712 main.go:141] libmachine: (old-k8s-version-210442) Found IP for machine: 192.168.61.136
	I0425 20:03:30.804041   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserving static IP address...
	I0425 20:03:30.804057   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has current primary IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804495   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.804535   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | skip adding static IP to network mk-old-k8s-version-210442 - found existing host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"}
	I0425 20:03:30.804562   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserved static IP address: 192.168.61.136
	I0425 20:03:30.804582   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting for SSH to be available...
	I0425 20:03:30.804599   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Getting to WaitForSSH function...
	I0425 20:03:30.807110   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807533   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.807556   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807706   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH client type: external
	I0425 20:03:30.807725   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa (-rw-------)
	I0425 20:03:30.807767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:30.807783   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | About to run SSH command:
	I0425 20:03:30.807815   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | exit 0
	I0425 20:03:30.935091   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:30.935445   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetConfigRaw
	I0425 20:03:30.936168   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:30.938767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939193   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.939246   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939428   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 20:03:30.939630   72712 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:30.939649   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:30.939870   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:30.942320   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.942771   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942923   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:30.943113   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943306   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943468   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:30.943640   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:30.943842   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:30.943854   72712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:31.052598   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:31.052625   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.052821   72712 buildroot.go:166] provisioning hostname "old-k8s-version-210442"
	I0425 20:03:31.052844   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.053080   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.056324   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056713   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.056745   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056885   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.057056   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057190   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057375   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.057549   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.057724   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.057742   72712 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210442 && echo "old-k8s-version-210442" | sudo tee /etc/hostname
	I0425 20:03:31.188461   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210442
	
	I0425 20:03:31.188494   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.191628   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192088   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.192117   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192332   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.192519   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192655   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192767   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.192944   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.193142   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.193167   72712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:31.317374   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
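
The hostname step above runs a small shell snippet over SSH that either rewrites an existing `127.0.1.1` line or appends a new one, and does nothing if the name is already present. A minimal Go sketch of the same idempotent update, operating on a local hosts file (the file path and hostname are taken from this log only for illustration; this is not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic logged above: if no line already maps
// the hostname, either rewrite an existing "127.0.1.1 ..." line or append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")

	// Already present on some line? Then there is nothing to do.
	hasName := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
	for _, l := range lines {
		if hasName.MatchString(l) {
			return nil
		}
	}

	// Rewrite an existing 127.0.1.1 line, otherwise append one.
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
	replaced := false
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-210442"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
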
	I0425 20:03:31.317402   72712 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:31.317436   72712 buildroot.go:174] setting up certificates
	I0425 20:03:31.317447   72712 provision.go:84] configureAuth start
	I0425 20:03:31.317461   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.317778   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:31.321012   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321388   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.321421   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321698   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.323976   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324326   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.324354   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324523   72712 provision.go:143] copyHostCerts
	I0425 20:03:31.324573   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:31.324584   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:31.324656   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:31.324764   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:31.324778   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:31.324807   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:31.324879   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:31.324890   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:31.324915   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:31.324978   72712 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210442 san=[127.0.0.1 192.168.61.136 localhost minikube old-k8s-version-210442]
	I0425 20:03:31.410674   72712 provision.go:177] copyRemoteCerts
	I0425 20:03:31.410728   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:31.410755   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.413170   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413449   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.413491   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413634   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.413832   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.413988   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.414156   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:31.502759   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:31.536662   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0425 20:03:31.565106   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:31.593254   72712 provision.go:87] duration metric: took 275.793443ms to configureAuth
	I0425 20:03:31.593287   72712 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:31.593621   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 20:03:31.593720   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.596515   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.596827   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.596859   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.597057   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.597287   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597448   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597624   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.597775   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.597927   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.597942   72712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:31.925149   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:31.925182   72712 machine.go:97] duration metric: took 985.540626ms to provisionDockerMachine
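
The last provisioning step above drops the insecure-registry option into /etc/sysconfig/crio.minikube and restarts CRI-O. A small Go sketch of the equivalent local steps, assuming it runs as root on the guest; the path, variable name and 10.96.0.0/12 CIDR are the values shown in this log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Write the CRIO_MINIKUBE_OPTIONS sysconfig fragment shown in the log,
// then restart CRI-O so the option takes effect.
func main() {
	content := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "restart crio: %v\n%s", err, out)
	}
}
```
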
	I0425 20:03:31.925199   72712 start.go:293] postStartSetup for "old-k8s-version-210442" (driver="kvm2")
	I0425 20:03:31.925211   72712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:31.925258   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:31.925560   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:31.925596   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.928532   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.928982   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.929013   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.929232   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.929458   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.929637   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.929787   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.023009   72712 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:32.029391   72712 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:32.029426   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:32.029508   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:32.029576   72712 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:32.029664   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:32.046596   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:32.077323   72712 start.go:296] duration metric: took 152.112632ms for postStartSetup
	I0425 20:03:32.077396   72712 fix.go:56] duration metric: took 20.821829703s for fixHost
	I0425 20:03:32.077425   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.080136   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080477   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.080526   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080636   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.080836   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081067   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081283   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.081493   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:32.081695   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:32.081711   72712 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:03:32.187617   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075412.163072845
	
	I0425 20:03:32.187642   72712 fix.go:216] guest clock: 1714075412.163072845
	I0425 20:03:32.187652   72712 fix.go:229] Guest: 2024-04-25 20:03:32.163072845 +0000 UTC Remote: 2024-04-25 20:03:32.07740605 +0000 UTC m=+254.767943919 (delta=85.666795ms)
	I0425 20:03:32.187675   72712 fix.go:200] guest clock delta is within tolerance: 85.666795ms
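
The fix.go lines above read the guest clock with `date +%s.%N`, compare it to the host clock and accept the skew if it is small (85ms here). A minimal Go sketch of that comparison; the 2-second tolerance below is an assumption for illustration, the log only states that the delta was "within tolerance":

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "1714075412.163072845" (output of `date +%s.%N`) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1714075412.163072845")
	if err != nil {
		panic(err)
	}
	remote := time.Now()
	delta := time.Duration(math.Abs(float64(remote.Sub(guest))))

	// Assumed tolerance; a larger skew would trigger a clock resync on the guest.
	const tolerance = 2 * time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```
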
	I0425 20:03:32.187682   72712 start.go:83] releasing machines lock for "old-k8s-version-210442", held for 20.932154384s
	I0425 20:03:32.187709   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.187998   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:32.190538   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.190898   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.190932   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.191077   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191817   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191996   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.192076   72712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:32.192116   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.192208   72712 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:32.192230   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.194821   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.194988   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195191   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195212   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195334   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195368   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195500   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195673   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195677   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195847   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195866   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196063   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.196083   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196219   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.276462   72712 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:32.300979   72712 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:30.842282   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:32.843750   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.843779   72220 pod_ready.go:81] duration metric: took 8.508343704s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.843791   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850293   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.850316   72220 pod_ready.go:81] duration metric: took 6.517764ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850327   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855621   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.855657   72220 pod_ready.go:81] duration metric: took 5.31225ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855671   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860450   72220 pod_ready.go:92] pod "kube-proxy-whkbk" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.860483   72220 pod_ready.go:81] duration metric: took 4.797706ms for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860505   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865268   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.865286   72220 pod_ready.go:81] duration metric: took 4.774354ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865294   72220 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
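
The pod_ready.go lines above poll each control-plane pod in kube-system until its Ready condition turns True (or a per-pod timeout expires). A minimal client-go sketch of such a wait, assuming a kubeconfig path and pod name purely for illustration; this is a sketch of the pattern, not minikube's helper:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires,
// roughly what the pod_ready.go helpers in the log report on.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "etcd-no-preload-744552", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
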
	I0425 20:03:32.458446   72712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:32.465434   72712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:32.465518   72712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:32.486929   72712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:32.486954   72712 start.go:494] detecting cgroup driver to use...
	I0425 20:03:32.487019   72712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:32.509425   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:32.530999   72712 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:32.531059   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:32.547280   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:32.563594   72712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:32.699207   72712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:32.875013   72712 docker.go:233] disabling docker service ...
	I0425 20:03:32.875096   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:32.897149   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:32.916105   72712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:33.071143   72712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:33.231529   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:33.252919   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:33.277388   72712 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0425 20:03:33.277457   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.290889   72712 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:33.290953   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.305488   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.319263   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.332961   72712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:33.354086   72712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:33.373431   72712 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:33.373517   72712 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:33.398458   72712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:33.418683   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:33.595555   72712 ssh_runner.go:195] Run: sudo systemctl restart crio
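
The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image and switch the cgroup manager to cgroupfs, then restart CRI-O. A sketch of the same two edits done directly on the file (same keys and values as in the log; must run as root on the guest):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// Rewrites the same two keys the sed commands in the log touch:
// pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf.
func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("updated", path, "- restart CRI-O with: systemctl restart crio")
}
```
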
	I0425 20:03:33.808015   72712 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:33.810391   72712 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:33.817593   72712 start.go:562] Will wait 60s for crictl version
	I0425 20:03:33.817646   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:33.823381   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:33.866310   72712 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:33.866411   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.905561   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.952764   72712 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0425 20:03:32.214679   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Start
	I0425 20:03:32.214880   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring networks are active...
	I0425 20:03:32.215746   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network default is active
	I0425 20:03:32.216106   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network mk-embed-certs-512173 is active
	I0425 20:03:32.216566   71966 main.go:141] libmachine: (embed-certs-512173) Getting domain xml...
	I0425 20:03:32.217397   71966 main.go:141] libmachine: (embed-certs-512173) Creating domain...
	I0425 20:03:33.554665   71966 main.go:141] libmachine: (embed-certs-512173) Waiting to get IP...
	I0425 20:03:33.555670   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.556123   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.556186   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.556089   73884 retry.go:31] will retry after 278.996701ms: waiting for machine to come up
	I0425 20:03:33.836750   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.837273   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.837301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.837244   73884 retry.go:31] will retry after 324.410317ms: waiting for machine to come up
	I0425 20:03:34.163017   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.163490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.163518   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.163457   73884 retry.go:31] will retry after 403.985826ms: waiting for machine to come up
	I0425 20:03:34.568824   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.569364   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.569397   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.569330   73884 retry.go:31] will retry after 427.12179ms: waiting for machine to come up
	I0425 20:03:34.998092   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.998684   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.998709   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.998646   73884 retry.go:31] will retry after 710.71475ms: waiting for machine to come up
	I0425 20:03:35.710643   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:35.711707   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:35.711736   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:35.711616   73884 retry.go:31] will retry after 806.283051ms: waiting for machine to come up
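
The embed-certs-512173 machine above is polled for a DHCP lease with a growing delay between attempts (279ms, 324ms, 404ms, ...). A small Go sketch of that retry-with-backoff pattern; the lookupIP function below is a stand-in for the libvirt lease query, and the exact backoff/jitter formula is an assumption based only on the delays visible in the log:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for the libvirt DHCP-lease query the driver performs;
// here it simply fails a few times before "finding" an address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.39.10", nil // illustrative address
}

func main() {
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the wait roughly linearly and add jitter, like the retry.go lines in the log.
		wait := time.Duration(attempt)*300*time.Millisecond +
			time.Duration(rand.Intn(200))*time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
}
```
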
	I0425 20:03:31.803034   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:33.813548   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:35.304283   72304 node_ready.go:49] node "default-k8s-diff-port-142196" has status "Ready":"True"
	I0425 20:03:35.304311   72304 node_ready.go:38] duration metric: took 5.505123781s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:35.304323   72304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:35.311480   72304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320910   72304 pod_ready.go:92] pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:35.320938   72304 pod_ready.go:81] duration metric: took 9.425507ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320953   72304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:33.954161   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:33.957316   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.957778   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:33.957811   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.958080   72712 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:33.964467   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:33.984277   72712 kubeadm.go:877] updating cluster {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:33.984437   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 20:03:33.984499   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:34.049402   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:34.049479   72712 ssh_runner.go:195] Run: which lz4
	I0425 20:03:34.055519   72712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 20:03:34.061481   72712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:34.061522   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0425 20:03:36.271646   72712 crio.go:462] duration metric: took 2.216165414s to copy over tarball
	I0425 20:03:36.271722   72712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
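
When no preloaded images are found on the guest, the tarball is copied over and then unpacked into /var with tar's lz4 filter, as logged above. A Go sketch of the extraction step using os/exec, with the flags and paths taken from the logged command (runs on the guest):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Extracts the preloaded image tarball the same way the logged command does:
// tar with an lz4 decompressor, preserving xattrs, into /var.
func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract preload:", err)
	}
}
```
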
	I0425 20:03:34.877483   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:37.373822   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:36.519514   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:36.520052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:36.520085   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:36.519968   73884 retry.go:31] will retry after 990.986618ms: waiting for machine to come up
	I0425 20:03:37.513151   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:37.513636   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:37.513669   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:37.513574   73884 retry.go:31] will retry after 1.371471682s: waiting for machine to come up
	I0425 20:03:38.886926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:38.887491   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:38.887527   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:38.887415   73884 retry.go:31] will retry after 1.633505345s: waiting for machine to come up
	I0425 20:03:40.523438   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:40.523975   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:40.524004   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:40.523926   73884 retry.go:31] will retry after 2.280577933s: waiting for machine to come up
	I0425 20:03:37.330040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.350040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.894331   72712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.622580176s)
	I0425 20:03:39.894364   72712 crio.go:469] duration metric: took 3.62268463s to extract the tarball
	I0425 20:03:39.894373   72712 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:39.965071   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:40.009534   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:40.009561   72712 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:03:40.009629   72712 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.009651   72712 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.009677   72712 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.009662   72712 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.009794   72712 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.009920   72712 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.010033   72712 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.010241   72712 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0425 20:03:40.011305   72712 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.011334   72712 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.011346   72712 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.011686   72712 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.012422   72712 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.012429   72712 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.012437   72712 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0425 20:03:40.012546   72712 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.143545   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.155203   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.157842   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.158081   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.161210   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.166515   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.181859   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0425 20:03:40.301699   72712 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0425 20:03:40.301759   72712 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.301805   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.379386   72712 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0425 20:03:40.379445   72712 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.379490   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406119   72712 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0425 20:03:40.406231   72712 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.406174   72712 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0425 20:03:40.406338   72712 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.406365   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406389   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420450   72712 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0425 20:03:40.420495   72712 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.420548   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420461   72712 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0425 20:03:40.420629   72712 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.420677   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430055   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.430110   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.430232   72712 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0425 20:03:40.430263   72712 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0425 20:03:40.430274   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.430277   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.430303   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430326   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.430389   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.582980   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0425 20:03:40.583094   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0425 20:03:40.587500   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0425 20:03:40.587564   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0425 20:03:40.587579   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0425 20:03:40.587650   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0425 20:03:40.587697   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0425 20:03:40.625942   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0425 20:03:40.941957   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:41.096086   72712 cache_images.go:92] duration metric: took 1.086507707s to LoadCachedImages
	W0425 20:03:41.096249   72712 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0425 20:03:41.096279   72712 kubeadm.go:928] updating node { 192.168.61.136 8443 v1.20.0 crio true true} ...
	I0425 20:03:41.096415   72712 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210442 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:41.096509   72712 ssh_runner.go:195] Run: crio config
	I0425 20:03:41.169311   72712 cni.go:84] Creating CNI manager for ""
	I0425 20:03:41.169341   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:41.169357   72712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:41.169397   72712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210442 NodeName:old-k8s-version-210442 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0425 20:03:41.169570   72712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210442"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:41.169639   72712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0425 20:03:41.182191   72712 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:41.182283   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:41.193546   72712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0425 20:03:41.218220   72712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:41.238647   72712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0425 20:03:41.259040   72712 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:41.263603   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:41.278007   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:41.425587   72712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:41.450990   72712 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442 for IP: 192.168.61.136
	I0425 20:03:41.451013   72712 certs.go:194] generating shared ca certs ...
	I0425 20:03:41.451034   72712 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:41.451225   72712 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:41.451307   72712 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:41.451323   72712 certs.go:256] generating profile certs ...
	I0425 20:03:41.451449   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.key
	I0425 20:03:41.451528   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key.1533c9ac
	I0425 20:03:41.451587   72712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key
	I0425 20:03:41.451789   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:41.451860   72712 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:41.451880   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:41.451915   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:41.451945   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:41.451968   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:41.452023   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:41.452870   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:41.510467   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:41.555595   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:41.606059   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:41.648206   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0425 20:03:41.690090   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:41.727674   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:41.766537   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:41.799524   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:41.828668   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:41.860964   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:41.890272   72712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:41.911787   72712 ssh_runner.go:195] Run: openssl version
	I0425 20:03:41.918926   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:41.933194   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.938995   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.939060   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.945934   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:41.959859   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:41.974906   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.980931   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.981006   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.987789   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:42.002455   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:42.016797   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023789   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023853   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.033189   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:42.047467   72712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:42.053552   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:42.063130   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:42.070290   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:42.079527   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:42.087983   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:42.096658   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 20:03:42.103477   72712 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:42.103596   72712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:42.103649   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.155980   72712 cri.go:89] found id: ""
	I0425 20:03:42.156085   72712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:42.172499   72712 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:42.172525   72712 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:42.172532   72712 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:42.172580   72712 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:42.187864   72712 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:42.188948   72712 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210442" does not appear in /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:42.189659   72712 kubeconfig.go:62] /home/jenkins/minikube-integration/18757-6355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210442" cluster setting kubeconfig missing "old-k8s-version-210442" context setting]
	I0425 20:03:42.190635   72712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:42.192402   72712 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:42.207284   72712 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.136
	I0425 20:03:42.207318   72712 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:42.207329   72712 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:42.207403   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.251184   72712 cri.go:89] found id: ""
	I0425 20:03:42.251257   72712 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:42.271727   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:42.289161   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:42.289184   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:42.289237   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:42.302492   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:42.302588   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:42.317790   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:42.329940   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:42.330002   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:42.342772   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:39.375028   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:41.871821   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.805640   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:42.806121   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:42.806148   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:42.806072   73884 retry.go:31] will retry after 2.588054599s: waiting for machine to come up
	I0425 20:03:45.395282   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:45.395712   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:45.395759   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:45.395662   73884 retry.go:31] will retry after 3.473643777s: waiting for machine to come up
	I0425 20:03:41.329479   72304 pod_ready.go:92] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.329511   72304 pod_ready.go:81] duration metric: took 6.008549199s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.329523   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335660   72304 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.335688   72304 pod_ready.go:81] duration metric: took 6.15557ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335700   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341409   72304 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.341433   72304 pod_ready.go:81] duration metric: took 5.723469ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341446   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347145   72304 pod_ready.go:92] pod "kube-proxy-bqmtp" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.347167   72304 pod_ready.go:81] duration metric: took 5.713095ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347179   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376913   72304 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.376939   72304 pod_ready.go:81] duration metric: took 29.751827ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376951   72304 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:43.383378   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:45.884869   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.356480   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:42.357280   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:42.370403   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:42.384245   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:42.384332   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:42.398271   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:42.412361   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:42.575076   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.186458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.480114   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.594128   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.707129   72712 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:43.707221   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.207406   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.707733   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.208100   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.708041   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.207966   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.707255   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:47.207754   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:43.873747   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:46.374439   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:48.871928   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:48.872457   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:48.872490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:48.872393   73884 retry.go:31] will retry after 4.148424216s: waiting for machine to come up
	I0425 20:03:48.384599   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.883246   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:47.707730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.208213   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.707685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.207879   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.707914   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.208278   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.707691   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.207600   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.707365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:52.207931   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.872282   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.872356   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.874452   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:53.022813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023343   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has current primary IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023367   71966 main.go:141] libmachine: (embed-certs-512173) Found IP for machine: 192.168.50.7
	I0425 20:03:53.023381   71966 main.go:141] libmachine: (embed-certs-512173) Reserving static IP address...
	I0425 20:03:53.023750   71966 main.go:141] libmachine: (embed-certs-512173) Reserved static IP address: 192.168.50.7
	I0425 20:03:53.023770   71966 main.go:141] libmachine: (embed-certs-512173) Waiting for SSH to be available...
	I0425 20:03:53.023791   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.023827   71966 main.go:141] libmachine: (embed-certs-512173) DBG | skip adding static IP to network mk-embed-certs-512173 - found existing host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"}
	I0425 20:03:53.023848   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Getting to WaitForSSH function...
	I0425 20:03:53.025753   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.026132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026244   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH client type: external
	I0425 20:03:53.026268   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa (-rw-------)
	I0425 20:03:53.026301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:53.026313   71966 main.go:141] libmachine: (embed-certs-512173) DBG | About to run SSH command:
	I0425 20:03:53.026325   71966 main.go:141] libmachine: (embed-certs-512173) DBG | exit 0
	I0425 20:03:53.158487   71966 main.go:141] libmachine: (embed-certs-512173) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:53.158846   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetConfigRaw
	I0425 20:03:53.159567   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.161881   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162200   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.162257   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162492   71966 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/config.json ...
	I0425 20:03:53.162658   71966 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:53.162675   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:53.162875   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.164797   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.165140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165256   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.165402   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165561   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165659   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.165815   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.165989   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.166002   71966 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:53.283185   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:53.283219   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283455   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:03:53.283480   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283690   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.286427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.286843   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286969   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.287164   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287350   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.287641   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.287881   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.287904   71966 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-512173 && echo "embed-certs-512173" | sudo tee /etc/hostname
	I0425 20:03:53.423037   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-512173
	
	I0425 20:03:53.423067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.425749   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.426140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426329   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.426501   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426640   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426747   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.426866   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.427015   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.427083   71966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-512173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-512173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-512173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:53.553687   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:53.553715   71966 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:53.553749   71966 buildroot.go:174] setting up certificates
	I0425 20:03:53.553758   71966 provision.go:84] configureAuth start
	I0425 20:03:53.553775   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.554053   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.556655   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.556995   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.557034   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.557121   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.559341   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559692   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.559718   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559897   71966 provision.go:143] copyHostCerts
	I0425 20:03:53.559970   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:53.559984   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:53.560049   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:53.560129   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:53.560136   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:53.560155   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:53.560203   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:53.560214   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:53.560233   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:53.560278   71966 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-512173 san=[127.0.0.1 192.168.50.7 embed-certs-512173 localhost minikube]
	I0425 20:03:53.621714   71966 provision.go:177] copyRemoteCerts
	I0425 20:03:53.621777   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:53.621804   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.624556   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.624883   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.624914   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.625128   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.625324   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.625458   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.625602   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:53.715477   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:03:53.743782   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:53.771468   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:53.798701   71966 provision.go:87] duration metric: took 244.92871ms to configureAuth
	I0425 20:03:53.798726   71966 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:53.798922   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:53.798991   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.801607   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.801946   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.801972   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.802187   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.802373   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802628   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.802833   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.802986   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.803000   71966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:54.117164   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:54.117193   71966 machine.go:97] duration metric: took 954.522384ms to provisionDockerMachine
	I0425 20:03:54.117207   71966 start.go:293] postStartSetup for "embed-certs-512173" (driver="kvm2")
	I0425 20:03:54.117219   71966 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:54.117238   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.117558   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:54.117591   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.120060   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.120454   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120575   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.120761   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.120891   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.121002   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.209919   71966 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:54.215633   71966 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:54.215663   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:54.215747   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:54.215860   71966 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:54.215996   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:54.227250   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:54.257169   71966 start.go:296] duration metric: took 139.949813ms for postStartSetup
	I0425 20:03:54.257212   71966 fix.go:56] duration metric: took 22.069363419s for fixHost
	I0425 20:03:54.257237   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.260255   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260588   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.260613   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260731   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.260928   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261099   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261266   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.261447   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:54.261644   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:54.261655   71966 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:03:54.376222   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075434.352338373
	
	I0425 20:03:54.376245   71966 fix.go:216] guest clock: 1714075434.352338373
	I0425 20:03:54.376255   71966 fix.go:229] Guest: 2024-04-25 20:03:54.352338373 +0000 UTC Remote: 2024-04-25 20:03:54.257217658 +0000 UTC m=+368.446046405 (delta=95.120715ms)
	I0425 20:03:54.376287   71966 fix.go:200] guest clock delta is within tolerance: 95.120715ms
	I0425 20:03:54.376295   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 22.188484297s
	I0425 20:03:54.376317   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.376600   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:54.379217   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379646   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.379678   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379869   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380436   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380633   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380729   71966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:54.380779   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.380857   71966 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:54.380880   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.383698   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384081   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384283   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384471   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.384610   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.384647   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384683   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384781   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.384821   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384982   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.385131   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.385330   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.468506   71966 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:54.493995   71966 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:54.642719   71966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:54.649565   71966 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:54.649632   71966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:54.667526   71966 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:54.667546   71966 start.go:494] detecting cgroup driver to use...
	I0425 20:03:54.667596   71966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:54.685384   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:54.701852   71966 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:54.701905   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:54.718559   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:54.734874   71966 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:54.858325   71966 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:55.045158   71966 docker.go:233] disabling docker service ...
	I0425 20:03:55.045219   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:55.061668   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:55.076486   71966 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:55.207287   71966 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:55.352537   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:55.369470   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:55.392638   71966 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:55.392718   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.404590   71966 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:55.404655   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.416129   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.427176   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.438632   71966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:55.450725   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.462912   71966 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.485340   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
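
Note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place over SSH: they pin the pause image, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and open net.ipv4.ip_unprivileged_port_start via default_sysctls. A minimal sketch of that pattern, where runOverSSH and configureCRIO are hypothetical stand-ins for minikube's ssh_runner plumbing (shelled out locally here purely for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runOverSSH is an illustrative stand-in for minikube's ssh_runner; here it
    // just executes the command through a local shell.
    func runOverSSH(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }

    func configureCRIO(pauseImage, cgroupManager string) error {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	steps := []string{
    		// pin the pause image CRI-O uses for pod sandboxes
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
    		// match CRI-O's cgroup manager to the kubelet's cgroup driver
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
    		// let conmon share the pod's cgroup
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
    	}
    	for _, s := range steps {
    		if err := runOverSSH(s); err != nil {
    			return fmt.Errorf("%q failed: %w", s, err)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := configureCRIO("registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
    		fmt.Println(err)
    	}
    }

Because each step is a substitution (or delete-then-append) rather than an append, re-running the same start is idempotent: the file ends up with the same values instead of accumulating duplicates.
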
	I0425 20:03:55.498134   71966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:55.508378   71966 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:55.508451   71966 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:55.523073   71966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:55.533901   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:55.666845   71966 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:55.828131   71966 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:55.828199   71966 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:55.833768   71966 start.go:562] Will wait 60s for crictl version
	I0425 20:03:55.833824   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:03:55.838000   71966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:55.881652   71966 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:55.881753   71966 ssh_runner.go:195] Run: crio --version
	I0425 20:03:55.917675   71966 ssh_runner.go:195] Run: crio --version
	I0425 20:03:55.953046   71966 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:03:52.884447   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:54.884538   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.707459   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.208241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.707431   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.207538   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.707289   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.207319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.707625   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.207562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.708324   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:57.207348   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.373713   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.374476   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:55.954484   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:55.957214   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957611   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:55.957638   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957832   71966 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:55.962420   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:55.976512   71966 kubeadm.go:877] updating cluster {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:55.976626   71966 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:55.976694   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:56.019881   71966 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:56.019942   71966 ssh_runner.go:195] Run: which lz4
	I0425 20:03:56.024524   71966 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 20:03:56.029297   71966 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:56.029339   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:57.736602   71966 crio.go:462] duration metric: took 1.712117844s to copy over tarball
	I0425 20:03:57.736666   71966 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:04:00.331696   71966 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.594977915s)
	I0425 20:04:00.331739   71966 crio.go:469] duration metric: took 2.595109768s to extract the tarball
	I0425 20:04:00.331751   71966 ssh_runner.go:146] rm: /preloaded.tar.lz4
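
Note: the block above is the preload path. crictl reports that kube-apiserver:v1.30.0 is not present, so the ~395 MB preloaded-images tarball is copied to /preloaded.tar.lz4, unpacked into /var with tar -I lz4 (about 2.6s here), and then removed. A rough sketch of that sequence, assuming hypothetical copyToGuest/runOnGuest helpers in place of minikube's scp and ssh_runner code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // copyToGuest and runOnGuest are illustrative stand-ins for minikube's
    // scp/ssh_runner helpers; here they act on the local filesystem.
    func copyToGuest(src, dst string) error { return exec.Command("cp", src, dst).Run() }
    func runOnGuest(cmd string) error       { return exec.Command("sh", "-c", cmd).Run() }

    // installPreload pushes the lz4 image tarball to the guest, extracts the
    // container image store under /var, and removes the tarball afterwards.
    func installPreload(localTarball string) error {
    	if err := copyToGuest(localTarball, "/preloaded.tar.lz4"); err != nil {
    		return fmt.Errorf("copy: %w", err)
    	}
    	extract := "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"
    	if err := runOnGuest(extract); err != nil {
    		return fmt.Errorf("extract: %w", err)
    	}
    	return runOnGuest("sudo rm -f /preloaded.tar.lz4")
    }

    func main() {
    	if err := installPreload("preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4"); err != nil {
    		fmt.Println("preload failed:", err)
    	}
    }
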
	I0425 20:04:00.375437   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:04:00.430963   71966 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:04:00.430987   71966 cache_images.go:84] Images are preloaded, skipping loading
	I0425 20:04:00.430994   71966 kubeadm.go:928] updating node { 192.168.50.7 8443 v1.30.0 crio true true} ...
	I0425 20:04:00.431081   71966 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-512173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:04:00.431154   71966 ssh_runner.go:195] Run: crio config
	I0425 20:04:00.487082   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:00.487106   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:00.487117   71966 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:04:00.487135   71966 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-512173 NodeName:embed-certs-512173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:04:00.487306   71966 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-512173"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:04:00.487378   71966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:04:00.498819   71966 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:04:00.498881   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:04:00.509212   71966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0425 20:04:00.527703   71966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:04:00.546867   71966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0425 20:04:00.566302   71966 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0425 20:04:00.570629   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:04:00.584123   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:00.717589   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:00.743108   71966 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173 for IP: 192.168.50.7
	I0425 20:04:00.743173   71966 certs.go:194] generating shared ca certs ...
	I0425 20:04:00.743201   71966 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:00.743397   71966 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:04:00.743462   71966 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:04:00.743480   71966 certs.go:256] generating profile certs ...
	I0425 20:04:00.743644   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/client.key
	I0425 20:04:00.743729   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key.4a0c231f
	I0425 20:04:00.743789   71966 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key
	I0425 20:04:00.743964   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:04:00.744019   71966 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:04:00.744033   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:04:00.744064   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:04:00.744093   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:04:00.744117   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:04:00.744158   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:04:00.745130   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:04:00.797856   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:04:00.848631   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:56.885355   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:58.885857   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.707868   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.208319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.207410   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.707562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.208006   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.708245   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.208178   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.707239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:02.207926   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.873851   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.372919   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:00.877499   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:04:01.210716   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0425 20:04:01.239562   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:04:01.267356   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:04:01.295649   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:04:01.323739   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:04:01.350440   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:04:01.379693   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:04:01.409347   71966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:04:01.429857   71966 ssh_runner.go:195] Run: openssl version
	I0425 20:04:01.437636   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:04:01.449656   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455022   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455074   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.461442   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:04:01.473323   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:04:01.485988   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491661   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491719   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.498567   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:04:01.510983   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:04:01.523098   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528619   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528667   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.535129   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:04:01.546668   71966 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:04:01.552076   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:04:01.558928   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:04:01.566406   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:04:01.574761   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:04:01.581250   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:04:01.588506   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
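
Note: each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, which would force regeneration before kubeadm is invoked. The same check can be expressed directly with Go's crypto/x509; the sketch below is only illustrative and uses two of the paths from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path remains valid for at
    // least d more (the log's -checkend 86400 corresponds to 24h).
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    	} {
    		ok, err := validFor(p, 24*time.Hour)
    		fmt.Println(p, ok, err)
    	}
    }
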
	I0425 20:04:01.594844   71966 kubeadm.go:391] StartCluster: {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:04:01.594917   71966 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:04:01.594978   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.648050   71966 cri.go:89] found id: ""
	I0425 20:04:01.648155   71966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:04:01.664291   71966 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:04:01.664318   71966 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:04:01.664325   71966 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:04:01.664387   71966 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:04:01.678686   71966 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:04:01.680096   71966 kubeconfig.go:125] found "embed-certs-512173" server: "https://192.168.50.7:8443"
	I0425 20:04:01.682375   71966 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:04:01.699073   71966 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0425 20:04:01.699109   71966 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:04:01.699122   71966 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:04:01.699190   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.744556   71966 cri.go:89] found id: ""
	I0425 20:04:01.744633   71966 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:04:01.767121   71966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:04:01.778499   71966 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:04:01.778517   71966 kubeadm.go:156] found existing configuration files:
	
	I0425 20:04:01.778575   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:04:01.789171   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:04:01.789242   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:04:01.800000   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:04:01.811015   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:04:01.811078   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:04:01.821752   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.832900   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:04:01.832962   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.844058   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:04:01.854774   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:04:01.854824   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:04:01.866086   71966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:04:01.879229   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.180778   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.971467   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.202841   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.286951   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
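
Note: because the kubeconfig and manifest files were found missing, restartPrimaryControlPlane re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of performing a full kubeadm init. A compressed sketch of that loop, with runOnGuest again a hypothetical stand-in for the real SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func runOnGuest(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }

    // restartControlPlane runs each init phase separately so a node with
    // existing etcd data can be repaired without a destructive full init.
    func restartControlPlane(k8sVersion, cfg string) error {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
    			k8sVersion, phase, cfg)
    		if err := runOnGuest(cmd); err != nil {
    			return fmt.Errorf("phase %q: %w", phase, err)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := restartControlPlane("v1.30.0", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		fmt.Println(err)
    	}
    }
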
	I0425 20:04:03.412260   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:04:03.412375   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.913176   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.413418   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.443763   71966 api_server.go:72] duration metric: took 1.031501246s to wait for apiserver process to appear ...
	I0425 20:04:04.443796   71966 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:04:04.443816   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:04.444334   71966 api_server.go:269] stopped: https://192.168.50.7:8443/healthz: Get "https://192.168.50.7:8443/healthz": dial tcp 192.168.50.7:8443: connect: connection refused
	I0425 20:04:04.943937   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:01.384590   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:03.885859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.707796   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.207913   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.708267   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.207491   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.707894   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.207346   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.707801   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.208283   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.707342   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:07.208190   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.381611   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:06.875270   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.463721   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.463767   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.463785   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.479254   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.479283   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.944812   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.949683   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:07.949710   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.444237   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.451663   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.451706   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.944231   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.949165   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.949194   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.444776   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.449703   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.449732   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.943865   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.948474   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.948509   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.444040   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.448740   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:10.448781   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.944487   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.950181   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:04:10.957455   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:04:10.957479   71966 api_server.go:131] duration metric: took 6.513676295s to wait for apiserver health ...
	I0425 20:04:10.957487   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:10.957496   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:10.959196   71966 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:04:06.384595   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:08.883972   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.707466   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.207370   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.707951   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.207604   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.708057   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.207422   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.707391   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.207510   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.707828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:12.207519   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.960795   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:04:10.977005   71966 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:04:11.001393   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:04:11.021408   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:04:11.021439   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:04:11.021453   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:04:11.021466   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:04:11.021478   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:04:11.021495   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:04:11.021502   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:04:11.021513   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:04:11.021521   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:04:11.021533   71966 system_pods.go:74] duration metric: took 20.120592ms to wait for pod list to return data ...
	I0425 20:04:11.021540   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:04:11.025328   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:04:11.025360   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:04:11.025374   71966 node_conditions.go:105] duration metric: took 3.826846ms to run NodePressure ...
	I0425 20:04:11.025394   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:11.304673   71966 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309061   71966 kubeadm.go:733] kubelet initialised
	I0425 20:04:11.309082   71966 kubeadm.go:734] duration metric: took 4.385794ms waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309089   71966 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:11.314583   71966 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.319490   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319515   71966 pod_ready.go:81] duration metric: took 4.900118ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.319524   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319534   71966 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.324084   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324101   71966 pod_ready.go:81] duration metric: took 4.557199ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.324108   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324113   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.328151   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328167   71966 pod_ready.go:81] duration metric: took 4.047894ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.328174   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328184   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.404944   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.404982   71966 pod_ready.go:81] duration metric: took 76.789573ms for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.404997   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.405006   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.805191   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805221   71966 pod_ready.go:81] duration metric: took 400.202708ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.805238   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805248   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.205817   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205847   71966 pod_ready.go:81] duration metric: took 400.591033ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.205858   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205866   71966 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.605705   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605736   71966 pod_ready.go:81] duration metric: took 399.849241ms for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.605745   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605754   71966 pod_ready.go:38] duration metric: took 1.29665644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:12.605776   71966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:04:12.620368   71966 ops.go:34] apiserver oom_adj: -16
	I0425 20:04:12.620397   71966 kubeadm.go:591] duration metric: took 10.956065292s to restartPrimaryControlPlane
	I0425 20:04:12.620405   71966 kubeadm.go:393] duration metric: took 11.025567867s to StartCluster
	I0425 20:04:12.620419   71966 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.620492   71966 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:04:12.623272   71966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.623577   71966 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:04:12.625335   71966 out.go:177] * Verifying Kubernetes components...
	I0425 20:04:12.623608   71966 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:04:12.623775   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:04:12.626619   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:12.626625   71966 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-512173"
	I0425 20:04:12.626642   71966 addons.go:69] Setting metrics-server=true in profile "embed-certs-512173"
	I0425 20:04:12.626664   71966 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-512173"
	W0425 20:04:12.626674   71966 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:04:12.626681   71966 addons.go:234] Setting addon metrics-server=true in "embed-certs-512173"
	W0425 20:04:12.626690   71966 addons.go:243] addon metrics-server should already be in state true
	I0425 20:04:12.626623   71966 addons.go:69] Setting default-storageclass=true in profile "embed-certs-512173"
	I0425 20:04:12.626709   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626714   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626718   71966 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-512173"
	I0425 20:04:12.626985   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627013   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627020   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627035   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627088   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627130   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.642680   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0425 20:04:12.642798   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0425 20:04:12.642972   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0425 20:04:12.643182   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643288   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643418   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643671   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643696   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643871   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643884   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643893   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643915   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.644227   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644235   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644403   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.644431   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644819   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.644942   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.644980   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.645022   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.647992   71966 addons.go:234] Setting addon default-storageclass=true in "embed-certs-512173"
	W0425 20:04:12.648011   71966 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:04:12.648045   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.648393   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.648429   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.660989   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41421
	I0425 20:04:12.661534   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.662561   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.662592   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.662614   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0425 20:04:12.662804   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0425 20:04:12.662947   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663016   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663116   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.663173   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663515   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663547   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663585   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663604   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663882   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663920   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.664096   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.664487   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.664506   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.665031   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.667087   71966 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:04:12.668326   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:04:12.668343   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:04:12.668361   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.666460   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.669907   71966 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:04:09.373628   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:11.376301   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.671391   71966 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.671411   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:04:12.671427   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.671566   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672113   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.672132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672233   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.672353   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.672439   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.672525   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.674511   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.674926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.674951   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.675178   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.675357   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.675505   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.675662   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.683720   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0425 20:04:12.684195   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.684736   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.684755   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.685100   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.685282   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.687009   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.687257   71966 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.687277   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:04:12.687325   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.689958   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690356   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.690374   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690446   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.690655   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.690841   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.690989   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.846840   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:12.865045   71966 node_ready.go:35] waiting up to 6m0s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:12.938848   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:04:12.938875   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:04:12.941038   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.959316   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.977813   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:04:12.977841   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:04:13.050586   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:13.050610   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:04:13.111207   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:14.253195   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.31212607s)
	I0425 20:04:14.253252   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253247   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.293897647s)
	I0425 20:04:14.253268   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253303   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253371   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253625   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253641   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253650   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253656   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253677   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253690   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253699   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253711   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253876   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254099   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253911   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253949   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253977   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254193   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.260565   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.260584   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.260830   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.260850   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.342979   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.231720554s)
	I0425 20:04:14.343042   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343349   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.343358   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343374   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343390   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343398   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343602   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343623   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343633   71966 addons.go:470] Verifying addon metrics-server=true in "embed-certs-512173"
	I0425 20:04:14.346631   71966 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0425 20:04:14.347936   71966 addons.go:505] duration metric: took 1.724328435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0425 20:04:14.869074   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.383960   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:13.384840   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.883656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.707816   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.207561   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.708264   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.207822   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.707509   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.207507   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.707899   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.208254   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.708246   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:17.207508   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.873212   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.873263   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:18.373183   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:16.870001   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:18.368960   71966 node_ready.go:49] node "embed-certs-512173" has status "Ready":"True"
	I0425 20:04:18.368991   71966 node_ready.go:38] duration metric: took 5.503919958s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:18.369003   71966 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:18.375440   71966 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380902   71966 pod_ready.go:92] pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.380920   71966 pod_ready.go:81] duration metric: took 5.456921ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380928   71966 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386330   71966 pod_ready.go:92] pod "etcd-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.386386   71966 pod_ready.go:81] duration metric: took 5.451019ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386402   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391115   71966 pod_ready.go:92] pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.391138   71966 pod_ready.go:81] duration metric: took 4.727835ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391149   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:20.398316   71966 pod_ready.go:102] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.885191   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:20.384439   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.707948   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.207953   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.707659   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.207609   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.707567   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.207989   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.707938   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.208305   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.707827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:22.207940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.374376   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.873180   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.899221   71966 pod_ready.go:92] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.899240   71966 pod_ready.go:81] duration metric: took 4.508083804s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.899250   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904904   71966 pod_ready.go:92] pod "kube-proxy-8247p" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.904922   71966 pod_ready.go:81] duration metric: took 5.665557ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904929   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910035   71966 pod_ready.go:92] pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.910051   71966 pod_ready.go:81] duration metric: took 5.116298ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910059   71966 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:24.919233   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.884480   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:25.384287   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.707381   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.207532   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.707461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.208239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.707742   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.208365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.707323   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.207485   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.707727   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:27.208332   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.373538   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.872428   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.420297   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.918808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.385722   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.883321   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.707275   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.207776   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.708096   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.207685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.708249   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.207647   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.707943   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.207471   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.707902   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:32.207582   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.872576   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.372818   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.416593   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:34.416976   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:31.884120   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:33.885341   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:35.886190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.708066   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.208090   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.707474   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.207664   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.708110   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.208160   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.707940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.207505   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.708334   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:37.207939   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.375813   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.873166   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.417945   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.916796   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.384530   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.384673   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:37.707256   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.207621   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.708237   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.208327   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.707542   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.207371   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.708300   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.207577   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.708097   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:42.207684   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.876272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:41.372217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.918223   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:43.420086   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.389390   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:44.885243   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.708257   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.207407   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.707548   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:43.707618   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:43.753656   72712 cri.go:89] found id: ""
	I0425 20:04:43.753686   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.753698   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:43.753706   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:43.753770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:43.797957   72712 cri.go:89] found id: ""
	I0425 20:04:43.797982   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.797991   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:43.797996   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:43.798051   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:43.836700   72712 cri.go:89] found id: ""
	I0425 20:04:43.836729   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.836737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:43.836742   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:43.836795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:43.883452   72712 cri.go:89] found id: ""
	I0425 20:04:43.883478   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.883486   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:43.883492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:43.883544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:43.929975   72712 cri.go:89] found id: ""
	I0425 20:04:43.930004   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.930014   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:43.930022   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:43.930089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:43.967648   72712 cri.go:89] found id: ""
	I0425 20:04:43.967681   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.967693   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:43.967701   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:43.967758   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:44.011024   72712 cri.go:89] found id: ""
	I0425 20:04:44.011048   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.011072   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:44.011078   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:44.011129   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:44.050233   72712 cri.go:89] found id: ""
	I0425 20:04:44.050263   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.050274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:44.050286   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:44.050302   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:44.196275   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:44.196307   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:44.196323   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:44.260707   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:44.260748   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:44.306051   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:44.306090   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:44.357643   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:44.357682   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:46.875982   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:46.890987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:46.891062   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:46.935855   72712 cri.go:89] found id: ""
	I0425 20:04:46.935878   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.935885   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:46.935891   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:46.935948   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:46.978634   72712 cri.go:89] found id: ""
	I0425 20:04:46.978662   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.978674   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:46.978681   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:46.978749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:47.019845   72712 cri.go:89] found id: ""
	I0425 20:04:47.019864   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.019872   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:47.019877   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:47.019933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:47.065002   72712 cri.go:89] found id: ""
	I0425 20:04:47.065040   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.065064   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:47.065072   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:47.065139   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:47.106370   72712 cri.go:89] found id: ""
	I0425 20:04:47.106404   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.106416   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:47.106423   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:47.106483   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:47.143851   72712 cri.go:89] found id: ""
	I0425 20:04:47.143874   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.143883   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:47.143888   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:47.143932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:47.186130   72712 cri.go:89] found id: ""
	I0425 20:04:47.186160   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.186168   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:47.186174   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:47.186238   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:47.228959   72712 cri.go:89] found id: ""
	I0425 20:04:47.228984   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.228992   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:47.229000   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:47.229010   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:47.299852   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:47.299893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:47.346078   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:47.346111   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:43.872670   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:46.373259   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:45.917948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.919494   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:50.420952   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.388353   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:49.884300   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.405897   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:47.405932   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:47.424426   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:47.424455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:47.506603   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.007697   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:50.023258   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:50.023333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:50.066794   72712 cri.go:89] found id: ""
	I0425 20:04:50.066827   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.066836   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:50.066842   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:50.066913   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:50.109167   72712 cri.go:89] found id: ""
	I0425 20:04:50.109200   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.109212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:50.109219   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:50.109306   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:50.151854   72712 cri.go:89] found id: ""
	I0425 20:04:50.151878   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.151886   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:50.151892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:50.151940   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:50.190600   72712 cri.go:89] found id: ""
	I0425 20:04:50.190632   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.190644   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:50.190672   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:50.190742   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:50.232851   72712 cri.go:89] found id: ""
	I0425 20:04:50.232874   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.232883   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:50.232889   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:50.232935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:50.274941   72712 cri.go:89] found id: ""
	I0425 20:04:50.274971   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.274983   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:50.274990   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:50.275069   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:50.320954   72712 cri.go:89] found id: ""
	I0425 20:04:50.320981   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.320992   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:50.320999   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:50.321068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:50.361799   72712 cri.go:89] found id: ""
	I0425 20:04:50.361829   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.361839   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:50.361847   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:50.361858   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:50.457792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.457819   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:50.457834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:50.539653   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:50.539702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:50.598740   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:50.598774   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:50.650501   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:50.650533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:48.872490   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.374484   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:52.919420   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:55.420126   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.887536   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:54.389174   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:53.167827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:53.183324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:53.183403   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:53.227598   72712 cri.go:89] found id: ""
	I0425 20:04:53.227641   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.227650   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:53.227655   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:53.227700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:53.271170   72712 cri.go:89] found id: ""
	I0425 20:04:53.271200   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.271212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:53.271220   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:53.271304   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:53.318185   72712 cri.go:89] found id: ""
	I0425 20:04:53.318233   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.318246   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:53.318255   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:53.318324   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:53.372199   72712 cri.go:89] found id: ""
	I0425 20:04:53.372228   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.372238   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:53.372244   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:53.372367   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:53.414048   72712 cri.go:89] found id: ""
	I0425 20:04:53.414080   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.414091   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:53.414099   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:53.414170   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:53.455746   72712 cri.go:89] found id: ""
	I0425 20:04:53.455806   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.455819   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:53.455827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:53.455901   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:53.497969   72712 cri.go:89] found id: ""
	I0425 20:04:53.497996   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.498004   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:53.498011   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:53.498057   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:53.543642   72712 cri.go:89] found id: ""
	I0425 20:04:53.543668   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.543675   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:53.543684   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:53.543693   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:53.596106   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:53.596144   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:53.612755   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:53.612787   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:53.693068   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:53.693089   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:53.693102   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:53.771499   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:53.771535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:56.322663   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:56.336866   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:56.336945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:56.375515   72712 cri.go:89] found id: ""
	I0425 20:04:56.375556   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.375567   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:56.375574   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:56.375641   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:56.423230   72712 cri.go:89] found id: ""
	I0425 20:04:56.423261   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.423273   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:56.423281   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:56.423366   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:56.467786   72712 cri.go:89] found id: ""
	I0425 20:04:56.467814   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.467835   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:56.467842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:56.467895   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:56.517671   72712 cri.go:89] found id: ""
	I0425 20:04:56.517696   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.517708   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:56.517715   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:56.517770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:56.558622   72712 cri.go:89] found id: ""
	I0425 20:04:56.558651   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.558662   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:56.558669   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:56.558746   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:56.601350   72712 cri.go:89] found id: ""
	I0425 20:04:56.601374   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.601382   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:56.601387   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:56.601444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:56.645892   72712 cri.go:89] found id: ""
	I0425 20:04:56.645923   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.645934   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:56.645940   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:56.646001   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:56.691619   72712 cri.go:89] found id: ""
	I0425 20:04:56.691645   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.691656   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:56.691665   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:56.691679   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:56.744854   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:56.744891   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:56.762523   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:56.762556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:56.843396   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:56.843422   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:56.843437   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:56.933785   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:56.933825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:53.872514   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.372956   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:58.373649   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:57.917208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.920979   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.884907   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.385506   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.481512   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:59.497510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:59.497588   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:59.547382   72712 cri.go:89] found id: ""
	I0425 20:04:59.547412   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.547423   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:59.547432   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:59.547486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:59.597671   72712 cri.go:89] found id: ""
	I0425 20:04:59.597699   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.597711   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:59.597717   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:59.597762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:59.641455   72712 cri.go:89] found id: ""
	I0425 20:04:59.641486   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.641497   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:59.641503   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:59.641613   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:59.685052   72712 cri.go:89] found id: ""
	I0425 20:04:59.685092   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.685104   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:59.685112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:59.685173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:59.735912   72712 cri.go:89] found id: ""
	I0425 20:04:59.735943   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.735951   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:59.735957   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:59.736025   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:59.799294   72712 cri.go:89] found id: ""
	I0425 20:04:59.799322   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.799332   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:59.799338   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:59.799395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:59.871270   72712 cri.go:89] found id: ""
	I0425 20:04:59.871297   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.871308   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:59.871315   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:59.871377   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:59.919001   72712 cri.go:89] found id: ""
	I0425 20:04:59.919091   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.919110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:59.919120   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:59.919135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:59.973458   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:59.973498   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:59.989729   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:59.989757   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:00.072887   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:00.072911   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:00.072926   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:00.153886   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:00.153921   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:00.873812   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.372969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.417960   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:04.420353   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:01.885238   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.887277   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:02.722771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:02.722831   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:02.770101   72712 cri.go:89] found id: ""
	I0425 20:05:02.770134   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.770147   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:02.770154   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:02.770224   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:02.817819   72712 cri.go:89] found id: ""
	I0425 20:05:02.817854   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.817865   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:02.817898   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:02.817963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:02.857036   72712 cri.go:89] found id: ""
	I0425 20:05:02.857066   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.857077   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:02.857085   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:02.857144   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:02.900112   72712 cri.go:89] found id: ""
	I0425 20:05:02.900145   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.900157   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:02.900164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:02.900221   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:02.941079   72712 cri.go:89] found id: ""
	I0425 20:05:02.941109   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.941116   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:02.941121   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:02.941198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:02.983458   72712 cri.go:89] found id: ""
	I0425 20:05:02.983490   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.983502   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:02.983510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:02.983574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:03.025424   72712 cri.go:89] found id: ""
	I0425 20:05:03.025451   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.025462   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:03.025469   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:03.025556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:03.065285   72712 cri.go:89] found id: ""
	I0425 20:05:03.065316   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.065328   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:03.065340   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:03.065351   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:03.121235   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:03.121267   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:03.138036   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:03.138073   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:03.213604   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:03.213638   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:03.213655   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:03.296696   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:03.296741   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.842642   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:05.859125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:05.859199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:05.906505   72712 cri.go:89] found id: ""
	I0425 20:05:05.906529   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.906537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:05.906542   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:05.906595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:05.950793   72712 cri.go:89] found id: ""
	I0425 20:05:05.950819   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.950831   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:05.950838   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:05.950902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:05.991612   72712 cri.go:89] found id: ""
	I0425 20:05:05.991644   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.991654   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:05.991661   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:05.991755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:06.032273   72712 cri.go:89] found id: ""
	I0425 20:05:06.032314   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.032326   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:06.032334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:06.032392   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:06.071802   72712 cri.go:89] found id: ""
	I0425 20:05:06.071833   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.071844   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:06.071852   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:06.071908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:06.116676   72712 cri.go:89] found id: ""
	I0425 20:05:06.116702   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.116710   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:06.116716   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:06.116759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:06.154720   72712 cri.go:89] found id: ""
	I0425 20:05:06.154753   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.154765   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:06.154771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:06.154842   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:06.196421   72712 cri.go:89] found id: ""
	I0425 20:05:06.196457   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.196469   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:06.196480   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:06.196493   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:06.251061   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:06.251122   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:06.267764   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:06.267799   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:06.345302   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:06.345334   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:06.345349   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:06.427836   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:06.427868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.873928   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.372014   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.422386   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.916659   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.384700   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.883611   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:10.885814   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
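
The pod_ready lines interleaved with that loop come from three other concurrently running test processes (72220, 71966, and 72304), each polling whether its metrics-server pod reports the Ready condition as True; throughout this excerpt the answer stays False. A hypothetical one-off equivalent of that poll, for one of the pods named above (the --context value is a placeholder, since the profile names are not shown in this excerpt):

	# <profile> is a stand-in for the cluster context, which this excerpt does not identify.
	kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-mlkqr \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
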
	I0425 20:05:08.989442   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:09.004493   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:09.004551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:09.056062   72712 cri.go:89] found id: ""
	I0425 20:05:09.056086   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.056096   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:09.056101   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:09.056148   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:09.096791   72712 cri.go:89] found id: ""
	I0425 20:05:09.096817   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.096827   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:09.096834   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:09.096889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:09.134649   72712 cri.go:89] found id: ""
	I0425 20:05:09.134680   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.134691   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:09.134698   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:09.134757   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:09.175980   72712 cri.go:89] found id: ""
	I0425 20:05:09.176010   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.176021   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:09.176028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:09.176084   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:09.216263   72712 cri.go:89] found id: ""
	I0425 20:05:09.216299   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.216313   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:09.216325   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:09.216395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:09.260498   72712 cri.go:89] found id: ""
	I0425 20:05:09.260528   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.260538   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:09.260544   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:09.260603   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:09.303154   72712 cri.go:89] found id: ""
	I0425 20:05:09.303178   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.303201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:09.303209   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:09.303269   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:09.350798   72712 cri.go:89] found id: ""
	I0425 20:05:09.350829   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.350840   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:09.350852   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:09.350868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:09.405295   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:09.405332   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:09.422788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:09.422820   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:09.501819   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:09.501841   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:09.501855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:09.586938   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:09.586981   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:12.132731   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:12.148860   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:12.148935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:12.194021   72712 cri.go:89] found id: ""
	I0425 20:05:12.194051   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.194064   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:12.194072   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:12.194152   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:12.234680   72712 cri.go:89] found id: ""
	I0425 20:05:12.234710   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.234721   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:12.234728   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:12.234792   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:12.277751   72712 cri.go:89] found id: ""
	I0425 20:05:12.277783   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.277794   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:12.277802   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:12.277864   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:12.324068   72712 cri.go:89] found id: ""
	I0425 20:05:12.324100   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.324117   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:12.324125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:12.324187   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:10.374594   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.873217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:11.424208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.425980   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.387259   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.884337   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.366797   72712 cri.go:89] found id: ""
	I0425 20:05:12.366825   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.366837   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:12.366844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:12.366903   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:12.413092   72712 cri.go:89] found id: ""
	I0425 20:05:12.413120   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.413132   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:12.413139   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:12.413198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:12.461229   72712 cri.go:89] found id: ""
	I0425 20:05:12.461253   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.461262   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:12.461268   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:12.461333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:12.504646   72712 cri.go:89] found id: ""
	I0425 20:05:12.504669   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.504677   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:12.504685   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:12.504698   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:12.561630   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:12.561673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:12.578043   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:12.578069   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:12.655176   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:12.655195   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:12.655209   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:12.736323   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:12.736357   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.287503   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:15.302830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:15.302893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:15.339479   72712 cri.go:89] found id: ""
	I0425 20:05:15.339509   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.339519   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:15.339527   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:15.339589   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:15.381431   72712 cri.go:89] found id: ""
	I0425 20:05:15.381458   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.381467   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:15.381475   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:15.381537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:15.423729   72712 cri.go:89] found id: ""
	I0425 20:05:15.423755   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.423767   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:15.423774   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:15.423833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:15.464367   72712 cri.go:89] found id: ""
	I0425 20:05:15.464401   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.464413   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:15.464421   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:15.464489   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:15.508306   72712 cri.go:89] found id: ""
	I0425 20:05:15.508336   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.508348   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:15.508356   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:15.508419   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:15.548572   72712 cri.go:89] found id: ""
	I0425 20:05:15.548600   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.548610   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:15.548616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:15.548678   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:15.592885   72712 cri.go:89] found id: ""
	I0425 20:05:15.592914   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.592926   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:15.592933   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:15.592992   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:15.632817   72712 cri.go:89] found id: ""
	I0425 20:05:15.632855   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.632868   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:15.632880   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:15.632900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:15.648443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:15.648470   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:15.726167   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:15.726191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:15.726229   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:15.803028   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:15.803066   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.850519   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:15.850552   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
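	(The cycle above repeats for the rest of this failure: minikube probes the guest over SSH for each expected control-plane container and, finding none, falls back to collecting kubelet, dmesg, CRI-O and container-status logs. A minimal shell sketch of that probe-and-gather loop, assuming shell access to the node and that crictl is installed; the component names and commands below simply mirror the ones already quoted in this log:)

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  # one crictl query per component, exactly as the cri.go lines above do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  if [ -z "${ids}" ]; then
	    echo "No container was found matching \"${name}\""
	  else
	    echo "${name}: ${ids}"
	  fi
	done
	# fallback log collection, mirroring the "Gathering logs for ..." steps
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a || sudo docker ps -a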
	I0425 20:05:14.873291   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:17.372118   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.917932   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.420096   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.384555   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.885930   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.404671   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:18.422600   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:18.422663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:18.476977   72712 cri.go:89] found id: ""
	I0425 20:05:18.477001   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.477009   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:18.477021   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:18.477093   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:18.525595   72712 cri.go:89] found id: ""
	I0425 20:05:18.525631   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.525641   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:18.525648   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:18.525714   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:18.565485   72712 cri.go:89] found id: ""
	I0425 20:05:18.565513   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.565523   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:18.565531   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:18.565600   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:18.612059   72712 cri.go:89] found id: ""
	I0425 20:05:18.612096   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.612106   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:18.612112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:18.612173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:18.659407   72712 cri.go:89] found id: ""
	I0425 20:05:18.659438   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.659449   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:18.659456   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:18.659507   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:18.701065   72712 cri.go:89] found id: ""
	I0425 20:05:18.701092   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.701101   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:18.701106   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:18.701201   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:18.738234   72712 cri.go:89] found id: ""
	I0425 20:05:18.738264   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.738276   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:18.738284   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:18.738343   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:18.780460   72712 cri.go:89] found id: ""
	I0425 20:05:18.780489   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.780498   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:18.780514   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:18.780526   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:18.834345   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:18.834378   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:18.850006   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:18.850033   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:18.932146   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:18.932171   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:18.932185   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:19.015036   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:19.015068   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:21.568250   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:21.582519   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:21.582595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:21.622886   72712 cri.go:89] found id: ""
	I0425 20:05:21.622913   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.622920   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:21.622925   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:21.622974   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:21.664832   72712 cri.go:89] found id: ""
	I0425 20:05:21.664860   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.664874   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:21.664882   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:21.664950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:21.703801   72712 cri.go:89] found id: ""
	I0425 20:05:21.703829   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.703843   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:21.703850   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:21.703911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:21.741502   72712 cri.go:89] found id: ""
	I0425 20:05:21.741540   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.741549   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:21.741555   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:21.741612   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:21.783715   72712 cri.go:89] found id: ""
	I0425 20:05:21.783745   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.783754   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:21.783759   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:21.783803   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:21.822806   72712 cri.go:89] found id: ""
	I0425 20:05:21.822842   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.822851   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:21.822856   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:21.822915   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:21.864996   72712 cri.go:89] found id: ""
	I0425 20:05:21.865020   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.865030   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:21.865037   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:21.865092   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:21.907533   72712 cri.go:89] found id: ""
	I0425 20:05:21.907563   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.907575   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:21.907585   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:21.907601   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:21.964226   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:21.964260   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:21.980096   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:21.980123   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:22.059516   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:22.059539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:22.059566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:22.136752   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:22.136784   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:19.373290   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:21.873377   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.916720   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:22.917156   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.918191   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:23.384566   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:25.885793   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.682139   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:24.697495   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:24.697564   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:24.739725   72712 cri.go:89] found id: ""
	I0425 20:05:24.739750   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.739760   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:24.739766   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:24.739824   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:24.777455   72712 cri.go:89] found id: ""
	I0425 20:05:24.777485   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.777497   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:24.777504   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:24.777566   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:24.821729   72712 cri.go:89] found id: ""
	I0425 20:05:24.821761   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.821774   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:24.821782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:24.821845   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:24.861745   72712 cri.go:89] found id: ""
	I0425 20:05:24.861773   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.861784   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:24.861791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:24.861851   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:24.903441   72712 cri.go:89] found id: ""
	I0425 20:05:24.903470   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.903479   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:24.903486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:24.903544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:24.943589   72712 cri.go:89] found id: ""
	I0425 20:05:24.943618   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.943629   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:24.943637   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:24.943717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:24.983629   72712 cri.go:89] found id: ""
	I0425 20:05:24.983661   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.983672   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:24.983680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:24.983739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:25.022413   72712 cri.go:89] found id: ""
	I0425 20:05:25.022441   72712 logs.go:276] 0 containers: []
	W0425 20:05:25.022451   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:25.022462   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:25.022477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:25.077402   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:25.077438   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:25.094488   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:25.094517   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:25.171485   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:25.171515   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:25.171535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:25.251131   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:25.251166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:24.373762   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:26.873969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.420395   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:29.420994   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:28.384247   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:30.883795   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.797359   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:27.813601   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:27.813659   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:27.854017   72712 cri.go:89] found id: ""
	I0425 20:05:27.854051   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.854061   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:27.854066   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:27.854117   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:27.900425   72712 cri.go:89] found id: ""
	I0425 20:05:27.900451   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.900461   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:27.900468   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:27.900531   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:27.940064   72712 cri.go:89] found id: ""
	I0425 20:05:27.940096   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.940107   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:27.940114   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:27.940174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:27.979363   72712 cri.go:89] found id: ""
	I0425 20:05:27.979385   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.979393   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:27.979399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:27.979442   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:28.019702   72712 cri.go:89] found id: ""
	I0425 20:05:28.019723   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.019731   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:28.019736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:28.019798   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:28.058711   72712 cri.go:89] found id: ""
	I0425 20:05:28.058740   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.058748   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:28.058755   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:28.058810   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:28.104465   72712 cri.go:89] found id: ""
	I0425 20:05:28.104495   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.104507   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:28.104515   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:28.104577   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:28.142399   72712 cri.go:89] found id: ""
	I0425 20:05:28.142431   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.142440   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:28.142449   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:28.142460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:28.222763   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:28.222786   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:28.222801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:28.299797   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:28.299838   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:28.366569   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:28.366594   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:28.424581   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:28.424628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:30.942526   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:30.957400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:30.957482   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:30.996931   72712 cri.go:89] found id: ""
	I0425 20:05:30.996958   72712 logs.go:276] 0 containers: []
	W0425 20:05:30.996967   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:30.996974   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:30.997029   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:31.035673   72712 cri.go:89] found id: ""
	I0425 20:05:31.035700   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.035712   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:31.035719   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:31.035782   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:31.075783   72712 cri.go:89] found id: ""
	I0425 20:05:31.075809   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.075820   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:31.075826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:31.075886   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:31.114229   72712 cri.go:89] found id: ""
	I0425 20:05:31.114257   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.114267   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:31.114274   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:31.114333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:31.155385   72712 cri.go:89] found id: ""
	I0425 20:05:31.155409   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.155419   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:31.155427   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:31.155486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:31.193772   72712 cri.go:89] found id: ""
	I0425 20:05:31.193804   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.193815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:31.193823   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:31.193878   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:31.233886   72712 cri.go:89] found id: ""
	I0425 20:05:31.233909   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.233917   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:31.233923   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:31.233967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:31.273427   72712 cri.go:89] found id: ""
	I0425 20:05:31.273455   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.273465   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:31.273476   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:31.273491   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:31.354429   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:31.354462   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:31.406018   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:31.406047   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:31.460972   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:31.461007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:31.477485   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:31.477513   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:31.551616   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:29.371357   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.373007   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.421948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.424866   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.384577   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.884780   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:34.052808   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:34.068068   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:34.068158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:34.120984   72712 cri.go:89] found id: ""
	I0425 20:05:34.121016   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.121024   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:34.121032   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:34.121082   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:34.160646   72712 cri.go:89] found id: ""
	I0425 20:05:34.160676   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.160687   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:34.160694   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:34.160752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:34.202641   72712 cri.go:89] found id: ""
	I0425 20:05:34.202665   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.202671   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:34.202677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:34.202733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:34.244352   72712 cri.go:89] found id: ""
	I0425 20:05:34.244379   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.244391   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:34.244400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:34.244460   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:34.285858   72712 cri.go:89] found id: ""
	I0425 20:05:34.285885   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.285896   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:34.285904   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:34.285956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:34.323634   72712 cri.go:89] found id: ""
	I0425 20:05:34.323662   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.323673   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:34.323681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:34.323739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:34.365230   72712 cri.go:89] found id: ""
	I0425 20:05:34.365256   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.365272   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:34.365280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:34.365339   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:34.409329   72712 cri.go:89] found id: ""
	I0425 20:05:34.409354   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.409365   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:34.409376   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:34.409390   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:34.464575   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:34.464606   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:34.480244   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:34.480270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:34.560204   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:34.560224   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:34.560236   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:34.640152   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:34.640187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:37.189992   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:37.204683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:37.204786   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:37.245857   72712 cri.go:89] found id: ""
	I0425 20:05:37.245891   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.245903   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:37.245910   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:37.245969   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:37.284668   72712 cri.go:89] found id: ""
	I0425 20:05:37.284696   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.284704   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:37.284710   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:37.284762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:37.324349   72712 cri.go:89] found id: ""
	I0425 20:05:37.324379   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.324391   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:37.324399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:37.324461   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:33.872836   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.873214   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.373278   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.917308   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.419746   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.383933   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.385166   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:37.361764   72712 cri.go:89] found id: ""
	I0425 20:05:37.361787   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.361800   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:37.361811   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:37.361857   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:37.404331   72712 cri.go:89] found id: ""
	I0425 20:05:37.404353   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.404360   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:37.404366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:37.404430   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:37.445284   72712 cri.go:89] found id: ""
	I0425 20:05:37.445316   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.445327   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:37.445334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:37.445395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:37.483806   72712 cri.go:89] found id: ""
	I0425 20:05:37.483828   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.483837   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:37.483843   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:37.483888   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:37.524649   72712 cri.go:89] found id: ""
	I0425 20:05:37.524673   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.524680   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:37.524689   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:37.524701   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:37.581521   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:37.581553   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:37.598459   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:37.598487   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:37.671236   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:37.671256   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:37.671272   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:37.750517   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:37.750556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.293743   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:40.310344   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:40.310426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:40.356157   72712 cri.go:89] found id: ""
	I0425 20:05:40.356198   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.356208   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:40.356215   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:40.356277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:40.397857   72712 cri.go:89] found id: ""
	I0425 20:05:40.397886   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.397895   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:40.397902   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:40.397964   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:40.445034   72712 cri.go:89] found id: ""
	I0425 20:05:40.445057   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.445065   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:40.445071   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:40.445126   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:40.493744   72712 cri.go:89] found id: ""
	I0425 20:05:40.493773   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.493783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:40.493797   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:40.493856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:40.550546   72712 cri.go:89] found id: ""
	I0425 20:05:40.550572   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.550580   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:40.550587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:40.550654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:40.605122   72712 cri.go:89] found id: ""
	I0425 20:05:40.605153   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.605164   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:40.605172   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:40.605232   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:40.675713   72712 cri.go:89] found id: ""
	I0425 20:05:40.675745   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.675755   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:40.675769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:40.675828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:40.716064   72712 cri.go:89] found id: ""
	I0425 20:05:40.716093   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.716101   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:40.716109   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:40.716120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:40.781395   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:40.781441   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:40.797597   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:40.797628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:40.880931   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:40.880956   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:40.880971   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:40.970770   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:40.970800   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.373398   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.873163   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.918560   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.417610   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:45.420963   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.883556   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:44.883719   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.520389   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:43.537668   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:43.537729   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:43.578137   72712 cri.go:89] found id: ""
	I0425 20:05:43.578166   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.578175   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:43.578180   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:43.578247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:43.617428   72712 cri.go:89] found id: ""
	I0425 20:05:43.617454   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.617462   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:43.617466   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:43.617519   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:43.655401   72712 cri.go:89] found id: ""
	I0425 20:05:43.655431   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.655443   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:43.655450   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:43.655514   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:43.695183   72712 cri.go:89] found id: ""
	I0425 20:05:43.695212   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.695230   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:43.695238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:43.695316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:43.735056   72712 cri.go:89] found id: ""
	I0425 20:05:43.735086   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.735098   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:43.735104   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:43.735162   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:43.774761   72712 cri.go:89] found id: ""
	I0425 20:05:43.774789   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.774799   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:43.774830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:43.774889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:43.819102   72712 cri.go:89] found id: ""
	I0425 20:05:43.819128   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.819138   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:43.819146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:43.819206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:43.858235   72712 cri.go:89] found id: ""
	I0425 20:05:43.858267   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.858278   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:43.858289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:43.858303   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:43.940756   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:43.940794   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:43.985878   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:43.985925   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:44.040177   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:44.040207   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:44.055912   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:44.055942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:44.143724   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
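	(Every "failed describe nodes" block in this log has the same shape: the in-guest kubectl targets the local apiserver on port 8443, nothing is serving there because no kube-apiserver container ever came up, so the connection is refused. A hedged way to confirm that condition from the node, assuming ss is available and using the same kubeconfig path the log already uses:)

	# is anything listening on the apiserver port the kubeconfig points at?
	sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
	# the same describe call the log keeps retrying
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig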
	I0425 20:05:46.643923   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:46.658863   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:46.658941   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:46.697826   72712 cri.go:89] found id: ""
	I0425 20:05:46.697850   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.697858   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:46.697884   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:46.697947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:46.739850   72712 cri.go:89] found id: ""
	I0425 20:05:46.739877   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.739888   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:46.739897   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:46.739955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:46.781212   72712 cri.go:89] found id: ""
	I0425 20:05:46.781241   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.781256   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:46.781262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:46.781321   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:46.826005   72712 cri.go:89] found id: ""
	I0425 20:05:46.826036   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.826047   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:46.826055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:46.826109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:46.865428   72712 cri.go:89] found id: ""
	I0425 20:05:46.865456   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.865465   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:46.865472   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:46.865522   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:46.914860   72712 cri.go:89] found id: ""
	I0425 20:05:46.914887   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.914897   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:46.914907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:46.914968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:46.955323   72712 cri.go:89] found id: ""
	I0425 20:05:46.955355   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.955365   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:46.955373   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:46.955436   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:46.999369   72712 cri.go:89] found id: ""
	I0425 20:05:46.999396   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.999408   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:46.999419   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:46.999464   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:47.013865   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:47.013893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:47.094725   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:47.094755   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:47.094771   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:47.178380   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:47.178426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:47.227217   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:47.227249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
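	(For reference, the diagnostic loop above can be reproduced by hand on the node — a minimal sketch, assuming SSH access to the minikube guest such as `minikube ssh -p <profile>` where `<profile>` is a placeholder; every command below is taken verbatim from the Run: lines in this log:)
	# list any kube-apiserver / etcd containers known to CRI-O (same crictl calls as above)
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	# the journals minikube collects as the "CRI-O" and "kubelet" logs
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	# kernel warnings/errors, gathered under "dmesg"
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# node description via the bundled kubectl; fails with "connection refused" while the apiserver is down
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig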
	I0425 20:05:45.375272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.872640   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.917579   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.918001   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:46.884746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:48.884818   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.780217   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:49.795690   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:49.795760   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:49.834909   72712 cri.go:89] found id: ""
	I0425 20:05:49.834935   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.834943   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:49.834951   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:49.835004   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:49.872717   72712 cri.go:89] found id: ""
	I0425 20:05:49.872747   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.872755   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:49.872762   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:49.872807   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:49.919348   72712 cri.go:89] found id: ""
	I0425 20:05:49.919376   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.919387   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:49.919395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:49.919465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:49.959673   72712 cri.go:89] found id: ""
	I0425 20:05:49.959705   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.959716   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:49.959728   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:49.959796   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:49.999276   72712 cri.go:89] found id: ""
	I0425 20:05:49.999299   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.999306   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:49.999312   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:49.999361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:50.037426   72712 cri.go:89] found id: ""
	I0425 20:05:50.037454   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.037461   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:50.037466   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:50.037510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:50.080666   72712 cri.go:89] found id: ""
	I0425 20:05:50.080695   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.080703   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:50.080719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:50.080776   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:50.126065   72712 cri.go:89] found id: ""
	I0425 20:05:50.126111   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.126123   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:50.126134   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:50.126148   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:50.140778   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:50.140805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:50.213282   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:50.213308   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:50.213320   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:50.293798   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:50.293832   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:50.336823   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:50.336859   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:49.873685   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.372830   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.919781   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:54.417518   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.382698   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:53.392894   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:55.884231   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.892579   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:52.909556   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:52.909629   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:52.948098   72712 cri.go:89] found id: ""
	I0425 20:05:52.948127   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.948138   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:52.948146   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:52.948206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:52.988813   72712 cri.go:89] found id: ""
	I0425 20:05:52.988840   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.988848   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:52.988853   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:52.988898   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:53.032181   72712 cri.go:89] found id: ""
	I0425 20:05:53.032211   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.032222   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:53.032230   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:53.032288   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:53.075496   72712 cri.go:89] found id: ""
	I0425 20:05:53.075528   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.075538   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:53.075543   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:53.075599   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:53.119037   72712 cri.go:89] found id: ""
	I0425 20:05:53.119070   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.119082   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:53.119095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:53.119158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:53.158276   72712 cri.go:89] found id: ""
	I0425 20:05:53.158303   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.158314   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:53.158321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:53.158381   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:53.196168   72712 cri.go:89] found id: ""
	I0425 20:05:53.196199   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.196211   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:53.196219   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:53.196277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:53.235212   72712 cri.go:89] found id: ""
	I0425 20:05:53.235235   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.235243   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:53.235250   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:53.235261   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:53.290435   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:53.290474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:53.306351   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:53.306380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:53.388623   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:53.388652   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:53.388666   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:53.480388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:53.480426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:56.027403   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:56.042683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:56.042755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:56.083672   72712 cri.go:89] found id: ""
	I0425 20:05:56.083706   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.083718   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:56.083725   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:56.083790   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:56.124071   72712 cri.go:89] found id: ""
	I0425 20:05:56.124105   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.124126   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:56.124134   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:56.124200   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:56.166692   72712 cri.go:89] found id: ""
	I0425 20:05:56.166724   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.166737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:56.166744   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:56.166808   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:56.203833   72712 cri.go:89] found id: ""
	I0425 20:05:56.203871   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.203884   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:56.203892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:56.203950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:56.242277   72712 cri.go:89] found id: ""
	I0425 20:05:56.242319   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.242341   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:56.242349   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:56.242416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:56.281697   72712 cri.go:89] found id: ""
	I0425 20:05:56.281726   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.281733   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:56.281739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:56.281812   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:56.322190   72712 cri.go:89] found id: ""
	I0425 20:05:56.322233   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.322243   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:56.322248   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:56.322310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:56.364831   72712 cri.go:89] found id: ""
	I0425 20:05:56.364853   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.364864   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:56.364875   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:56.364889   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:56.422824   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:56.422856   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:56.437619   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:56.437641   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:56.512938   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:56.512961   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:56.512977   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:56.598670   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:56.598708   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:54.872566   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.873184   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.917352   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.421645   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:58.383740   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:00.384113   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.150322   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:59.166883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:59.166956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:59.205086   72712 cri.go:89] found id: ""
	I0425 20:05:59.205112   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.205121   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:59.205126   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:59.205199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:59.253430   72712 cri.go:89] found id: ""
	I0425 20:05:59.253458   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.253469   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:59.253478   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:59.253539   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:59.293691   72712 cri.go:89] found id: ""
	I0425 20:05:59.293719   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.293731   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:59.293738   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:59.293801   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:59.331580   72712 cri.go:89] found id: ""
	I0425 20:05:59.331604   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.331613   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:59.331619   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:59.331663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:59.369985   72712 cri.go:89] found id: ""
	I0425 20:05:59.370012   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.370023   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:59.370031   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:59.370095   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:59.411636   72712 cri.go:89] found id: ""
	I0425 20:05:59.411662   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.411670   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:59.411676   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:59.411733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:59.454735   72712 cri.go:89] found id: ""
	I0425 20:05:59.454762   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.454774   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:59.454782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:59.454839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:59.497664   72712 cri.go:89] found id: ""
	I0425 20:05:59.497694   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.497704   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:59.497715   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:59.497731   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:59.556694   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:59.556728   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:59.572160   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:59.572187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:59.649040   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:59.649063   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:59.649083   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:59.727941   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:59.727975   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
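	(The recurring "connection to the server localhost:8443 was refused" in the describe-nodes step reflects that no kube-apiserver container has come up yet. A quick manual check on the node — a sketch, assuming standard tooling is available in the guest — could be:)
	# is anything listening on the apiserver port?
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	# has CRI-O ever created an apiserver container, including exited ones?
	sudo crictl ps -a --name=kube-apiserver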
	I0425 20:06:02.275513   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:02.290486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:02.290557   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:02.332217   72712 cri.go:89] found id: ""
	I0425 20:06:02.332255   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.332273   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:02.332281   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:02.332357   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:58.873314   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.373601   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.916947   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.418479   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.384744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.885488   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.373346   72712 cri.go:89] found id: ""
	I0425 20:06:02.373370   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.373377   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:02.373382   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:02.373439   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:02.415835   72712 cri.go:89] found id: ""
	I0425 20:06:02.415861   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.415873   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:02.415881   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:02.415939   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:02.458876   72712 cri.go:89] found id: ""
	I0425 20:06:02.458905   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.458917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:02.458926   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:02.459008   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:02.502092   72712 cri.go:89] found id: ""
	I0425 20:06:02.502127   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.502138   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:02.502146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:02.502235   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:02.546357   72712 cri.go:89] found id: ""
	I0425 20:06:02.546383   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.546393   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:02.546399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:02.546459   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:02.586842   72712 cri.go:89] found id: ""
	I0425 20:06:02.586870   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.586881   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:02.586887   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:02.586932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:02.629305   72712 cri.go:89] found id: ""
	I0425 20:06:02.629339   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.629350   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:02.629360   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:02.629374   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.676583   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:02.676626   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:02.731790   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:02.731825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:02.747473   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:02.747499   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:02.824265   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:02.824289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:02.824304   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.408968   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:05.423645   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:05.423713   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:05.467402   72712 cri.go:89] found id: ""
	I0425 20:06:05.467425   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.467434   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:05.467445   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:05.467510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:05.503131   72712 cri.go:89] found id: ""
	I0425 20:06:05.503153   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.503161   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:05.503166   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:05.503216   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:05.545694   72712 cri.go:89] found id: ""
	I0425 20:06:05.545721   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.545732   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:05.545739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:05.545804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:05.585879   72712 cri.go:89] found id: ""
	I0425 20:06:05.585905   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.585912   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:05.585917   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:05.585963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:05.625520   72712 cri.go:89] found id: ""
	I0425 20:06:05.625549   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.625560   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:05.625567   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:05.625620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:05.664306   72712 cri.go:89] found id: ""
	I0425 20:06:05.664335   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.664345   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:05.664364   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:05.664437   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:05.705353   72712 cri.go:89] found id: ""
	I0425 20:06:05.705385   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.705397   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:05.705405   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:05.705468   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:05.743935   72712 cri.go:89] found id: ""
	I0425 20:06:05.743968   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.743977   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:05.743986   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:05.743997   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:05.801190   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:05.801234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:05.817046   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:05.817074   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:05.899413   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:05.899443   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:05.899458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.986303   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:05.986336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:03.872605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:05.876833   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.373392   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.916334   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.917480   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.887784   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:09.387085   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.531748   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:08.550667   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:08.550749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:08.594062   72712 cri.go:89] found id: ""
	I0425 20:06:08.594093   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.594102   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:08.594108   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:08.594163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:08.635823   72712 cri.go:89] found id: ""
	I0425 20:06:08.635861   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.635872   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:08.635880   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:08.635944   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:08.675338   72712 cri.go:89] found id: ""
	I0425 20:06:08.675383   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.675395   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:08.675402   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:08.675463   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:08.715971   72712 cri.go:89] found id: ""
	I0425 20:06:08.716001   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.716012   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:08.716019   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:08.716088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:08.758565   72712 cri.go:89] found id: ""
	I0425 20:06:08.758597   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.758608   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:08.758616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:08.758683   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:08.800179   72712 cri.go:89] found id: ""
	I0425 20:06:08.800207   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.800218   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:08.800226   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:08.800286   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:08.854603   72712 cri.go:89] found id: ""
	I0425 20:06:08.854639   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.854651   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:08.854659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:08.854724   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:08.904115   72712 cri.go:89] found id: ""
	I0425 20:06:08.904141   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.904152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:08.904162   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:08.904177   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:08.921826   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:08.921855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:09.003667   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:09.003687   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:09.003699   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:09.086301   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:09.086346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:09.138478   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:09.138516   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:11.704402   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:11.721810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:11.721902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:11.768790   72712 cri.go:89] found id: ""
	I0425 20:06:11.768829   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.768850   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:11.768858   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:11.768928   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:11.813543   72712 cri.go:89] found id: ""
	I0425 20:06:11.813576   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.813588   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:11.813595   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:11.813654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:11.853930   72712 cri.go:89] found id: ""
	I0425 20:06:11.853962   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.853972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:11.853980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:11.854044   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:11.900808   72712 cri.go:89] found id: ""
	I0425 20:06:11.900843   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.900853   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:11.900861   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:11.900919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:11.948850   72712 cri.go:89] found id: ""
	I0425 20:06:11.948876   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.948885   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:11.948890   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:11.948945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:11.989326   72712 cri.go:89] found id: ""
	I0425 20:06:11.989356   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.989365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:11.989371   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:11.989450   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:12.033912   72712 cri.go:89] found id: ""
	I0425 20:06:12.033943   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.033954   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:12.033959   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:12.034015   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:12.076170   72712 cri.go:89] found id: ""
	I0425 20:06:12.076199   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.076209   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:12.076217   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:12.076230   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:12.124851   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:12.124881   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:12.178927   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:12.178964   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:12.194925   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:12.194952   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:12.272163   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:12.272187   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:12.272202   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:10.374908   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.871613   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:10.917911   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.918144   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:15.419043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:11.886066   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.383880   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.851400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:14.869893   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:14.869967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:14.915793   72712 cri.go:89] found id: ""
	I0425 20:06:14.915820   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.915829   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:14.915836   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:14.915896   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:14.959549   72712 cri.go:89] found id: ""
	I0425 20:06:14.959576   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.959587   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:14.959606   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:14.959672   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:15.001420   72712 cri.go:89] found id: ""
	I0425 20:06:15.001453   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.001465   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:15.001474   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:15.001552   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:15.047960   72712 cri.go:89] found id: ""
	I0425 20:06:15.047988   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.047996   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:15.048001   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:15.048049   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:15.096688   72712 cri.go:89] found id: ""
	I0425 20:06:15.096722   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.096730   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:15.096736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:15.096795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:15.142673   72712 cri.go:89] found id: ""
	I0425 20:06:15.142701   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.142712   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:15.142719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:15.142784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:15.181729   72712 cri.go:89] found id: ""
	I0425 20:06:15.181757   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.181766   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:15.181773   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:15.181820   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:15.227858   72712 cri.go:89] found id: ""
	I0425 20:06:15.227886   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.227897   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:15.227905   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:15.227917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:15.283253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:15.283293   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:15.305572   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:15.305604   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:15.439587   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:15.439615   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:15.439631   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:15.525678   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:15.525714   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:14.872914   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.873605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:17.420065   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:19.917501   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.383915   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:18.883746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:20.884190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
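	(The interleaved pod_ready lines come from parallel test profiles polling their metrics-server pods. The same readiness condition can be inspected directly with kubectl — a sketch, with the pod name taken from the log above and `<profile>` as a placeholder context name:)
	kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-mlkqr \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'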
	I0425 20:06:18.078788   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:18.095012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:18.095083   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:18.136753   72712 cri.go:89] found id: ""
	I0425 20:06:18.136784   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.136796   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:18.136802   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:18.136850   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:18.184584   72712 cri.go:89] found id: ""
	I0425 20:06:18.184606   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.184614   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:18.184619   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:18.184691   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:18.228201   72712 cri.go:89] found id: ""
	I0425 20:06:18.228250   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.228263   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:18.228270   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:18.228326   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:18.267756   72712 cri.go:89] found id: ""
	I0425 20:06:18.267778   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.267786   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:18.267792   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:18.267855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:18.309727   72712 cri.go:89] found id: ""
	I0425 20:06:18.309755   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.309763   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:18.309769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:18.309827   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:18.350549   72712 cri.go:89] found id: ""
	I0425 20:06:18.350580   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.350592   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:18.350599   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:18.350656   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:18.393868   72712 cri.go:89] found id: ""
	I0425 20:06:18.393891   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.393902   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:18.393910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:18.393989   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:18.435163   72712 cri.go:89] found id: ""
	I0425 20:06:18.435195   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.435204   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:18.435211   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:18.435224   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:18.450871   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:18.450901   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:18.534501   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:18.534526   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:18.534538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:18.616979   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:18.617015   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:18.663568   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:18.663598   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.217744   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:21.235862   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:21.235955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:21.288966   72712 cri.go:89] found id: ""
	I0425 20:06:21.288996   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.289005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:21.289014   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:21.289075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:21.362068   72712 cri.go:89] found id: ""
	I0425 20:06:21.362092   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.362101   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:21.362108   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:21.362168   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:21.416870   72712 cri.go:89] found id: ""
	I0425 20:06:21.416894   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.416901   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:21.416907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:21.416956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:21.461465   72712 cri.go:89] found id: ""
	I0425 20:06:21.461495   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.461503   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:21.461508   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:21.461570   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:21.499985   72712 cri.go:89] found id: ""
	I0425 20:06:21.500014   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.500025   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:21.500032   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:21.500081   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:21.543725   72712 cri.go:89] found id: ""
	I0425 20:06:21.543764   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.543776   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:21.543784   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:21.543841   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:21.586535   72712 cri.go:89] found id: ""
	I0425 20:06:21.586566   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.586578   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:21.586587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:21.586644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:21.627885   72712 cri.go:89] found id: ""
	I0425 20:06:21.627912   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.627921   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:21.627929   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:21.627942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.685973   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:21.686006   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:21.702529   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:21.702556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:21.781634   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:21.781660   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:21.781673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:21.862986   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:21.863027   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:19.372142   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.374479   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.918699   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.419088   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:23.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:25.883438   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.413547   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:24.428247   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:24.428323   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:24.468708   72712 cri.go:89] found id: ""
	I0425 20:06:24.468757   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.468768   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:24.468775   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:24.468836   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:24.507667   72712 cri.go:89] found id: ""
	I0425 20:06:24.507694   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.507702   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:24.507708   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:24.507769   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:24.548537   72712 cri.go:89] found id: ""
	I0425 20:06:24.548562   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.548570   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:24.548576   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:24.548625   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:24.591240   72712 cri.go:89] found id: ""
	I0425 20:06:24.591264   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.591272   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:24.591280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:24.591325   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:24.631530   72712 cri.go:89] found id: ""
	I0425 20:06:24.631557   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.631568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:24.631575   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:24.631642   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:24.672878   72712 cri.go:89] found id: ""
	I0425 20:06:24.672903   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.672911   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:24.672916   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:24.672960   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:24.716168   72712 cri.go:89] found id: ""
	I0425 20:06:24.716193   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.716201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:24.716206   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:24.716256   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:24.758061   72712 cri.go:89] found id: ""
	I0425 20:06:24.758098   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.758110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:24.758122   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:24.758135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:24.839866   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:24.839900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:24.889288   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:24.889380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:24.946445   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:24.946488   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:24.963093   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:24.963126   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:25.044921   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:23.874297   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.372055   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.375436   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.916503   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.916669   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.887709   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.384645   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.545838   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:27.562659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:27.562717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:27.606462   72712 cri.go:89] found id: ""
	I0425 20:06:27.606491   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.606501   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:27.606509   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:27.606567   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:27.650475   72712 cri.go:89] found id: ""
	I0425 20:06:27.650505   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.650517   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:27.650524   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:27.650583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:27.695163   72712 cri.go:89] found id: ""
	I0425 20:06:27.695190   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.695201   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:27.695208   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:27.695265   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:27.741798   72712 cri.go:89] found id: ""
	I0425 20:06:27.741832   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.741842   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:27.741849   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:27.741904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:27.784146   72712 cri.go:89] found id: ""
	I0425 20:06:27.784175   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.784185   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:27.784193   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:27.784253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:27.827179   72712 cri.go:89] found id: ""
	I0425 20:06:27.827213   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.827225   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:27.827234   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:27.827298   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:27.872941   72712 cri.go:89] found id: ""
	I0425 20:06:27.872961   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.872980   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:27.872985   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:27.873040   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:27.917920   72712 cri.go:89] found id: ""
	I0425 20:06:27.917949   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.917959   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:27.917970   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:27.917985   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:27.971411   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:27.971455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:27.988704   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:27.988743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:28.064208   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:28.064229   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:28.064242   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:28.147388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:28.147427   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.694349   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:30.708595   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:30.708671   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:30.752963   72712 cri.go:89] found id: ""
	I0425 20:06:30.752994   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.753005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:30.753012   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:30.753073   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:30.795453   72712 cri.go:89] found id: ""
	I0425 20:06:30.795488   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.795498   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:30.795507   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:30.795574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:30.838945   72712 cri.go:89] found id: ""
	I0425 20:06:30.838970   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.838978   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:30.838984   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:30.839042   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:30.886128   72712 cri.go:89] found id: ""
	I0425 20:06:30.886160   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.886170   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:30.886178   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:30.886255   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:30.927773   72712 cri.go:89] found id: ""
	I0425 20:06:30.927805   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.927819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:30.927827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:30.927893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:30.968628   72712 cri.go:89] found id: ""
	I0425 20:06:30.968660   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.968672   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:30.968680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:30.968743   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:31.014590   72712 cri.go:89] found id: ""
	I0425 20:06:31.014616   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.014627   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:31.014634   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:31.014697   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:31.053236   72712 cri.go:89] found id: ""
	I0425 20:06:31.053262   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.053274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:31.053285   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:31.053301   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:31.107797   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:31.107834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:31.123675   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:31.123702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:31.201180   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:31.201204   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:31.201215   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:31.289474   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:31.289512   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.873981   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.373083   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.918572   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.420043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:35.421384   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:32.883164   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:34.883697   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.840828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:33.857736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:33.857795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:33.898621   72712 cri.go:89] found id: ""
	I0425 20:06:33.898647   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.898658   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:33.898665   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:33.898727   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:33.939211   72712 cri.go:89] found id: ""
	I0425 20:06:33.939234   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.939245   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:33.939250   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:33.939305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:33.981872   72712 cri.go:89] found id: ""
	I0425 20:06:33.981896   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.981903   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:33.981909   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:33.981965   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:34.027570   72712 cri.go:89] found id: ""
	I0425 20:06:34.027597   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.027609   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:34.027617   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:34.027675   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:34.072544   72712 cri.go:89] found id: ""
	I0425 20:06:34.072570   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.072586   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:34.072594   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:34.072674   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:34.119326   72712 cri.go:89] found id: ""
	I0425 20:06:34.119349   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.119358   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:34.119366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:34.119423   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:34.169618   72712 cri.go:89] found id: ""
	I0425 20:06:34.169642   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.169650   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:34.169655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:34.169705   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:34.213570   72712 cri.go:89] found id: ""
	I0425 20:06:34.213593   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.213601   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:34.213609   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:34.213621   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:34.255722   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:34.255756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:34.311113   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:34.311147   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:34.326869   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:34.326897   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:34.399765   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:34.399788   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:34.399801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:36.986610   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:37.003090   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:37.003163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:37.045929   72712 cri.go:89] found id: ""
	I0425 20:06:37.045956   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.045964   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:37.045969   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:37.046022   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:37.086835   72712 cri.go:89] found id: ""
	I0425 20:06:37.086868   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.086879   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:37.086885   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:37.086937   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:37.127454   72712 cri.go:89] found id: ""
	I0425 20:06:37.127479   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.127488   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:37.127494   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:37.127551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:37.168878   72712 cri.go:89] found id: ""
	I0425 20:06:37.168904   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.168917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:37.168924   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:37.168986   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:37.208859   72712 cri.go:89] found id: ""
	I0425 20:06:37.208889   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.208901   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:37.208914   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:37.208970   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:37.250407   72712 cri.go:89] found id: ""
	I0425 20:06:37.250439   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.250452   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:37.250467   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:37.250536   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:37.291004   72712 cri.go:89] found id: ""
	I0425 20:06:37.291040   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.291054   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:37.291063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:37.291125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:37.335573   72712 cri.go:89] found id: ""
	I0425 20:06:37.335597   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.335608   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:37.335619   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:37.335635   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:35.873065   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.371805   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.426152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:39.916340   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:36.884518   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.884859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.392773   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:37.392810   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:37.408311   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:37.408343   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:37.491376   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:37.491402   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:37.491416   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:37.574559   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:37.574600   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.125241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:40.142254   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:40.142347   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:40.186859   72712 cri.go:89] found id: ""
	I0425 20:06:40.186893   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.186904   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:40.186911   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:40.186972   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:40.229247   72712 cri.go:89] found id: ""
	I0425 20:06:40.229275   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.229288   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:40.229295   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:40.229361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:40.268853   72712 cri.go:89] found id: ""
	I0425 20:06:40.268879   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.268890   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:40.268897   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:40.268959   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:40.307621   72712 cri.go:89] found id: ""
	I0425 20:06:40.307650   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.307669   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:40.307677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:40.307732   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:40.351448   72712 cri.go:89] found id: ""
	I0425 20:06:40.351472   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.351484   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:40.351492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:40.351548   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:40.396771   72712 cri.go:89] found id: ""
	I0425 20:06:40.396804   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.396815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:40.396824   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:40.396890   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:40.443605   72712 cri.go:89] found id: ""
	I0425 20:06:40.443634   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.443642   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:40.443647   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:40.443694   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:40.495496   72712 cri.go:89] found id: ""
	I0425 20:06:40.495525   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.495536   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:40.495548   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:40.495563   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.539428   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:40.539457   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:40.596259   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:40.596305   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:40.613140   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:40.613167   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:40.701768   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:40.701793   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:40.701805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:40.372225   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:42.373541   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.916879   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.917783   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.386292   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.885441   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.294502   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:43.310041   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:43.310113   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:43.351841   72712 cri.go:89] found id: ""
	I0425 20:06:43.351864   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.351872   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:43.351877   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:43.351924   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:43.395467   72712 cri.go:89] found id: ""
	I0425 20:06:43.395497   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.395509   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:43.395516   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:43.395576   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:43.437256   72712 cri.go:89] found id: ""
	I0425 20:06:43.437354   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.437375   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:43.437384   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:43.437465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:43.480744   72712 cri.go:89] found id: ""
	I0425 20:06:43.480772   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.480783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:43.480791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:43.480839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:43.519916   72712 cri.go:89] found id: ""
	I0425 20:06:43.519951   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.519961   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:43.519975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:43.520039   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:43.557861   72712 cri.go:89] found id: ""
	I0425 20:06:43.557890   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.557901   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:43.557910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:43.557968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:43.594423   72712 cri.go:89] found id: ""
	I0425 20:06:43.594449   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.594458   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:43.594464   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:43.594512   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:43.632227   72712 cri.go:89] found id: ""
	I0425 20:06:43.632253   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.632262   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:43.632270   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:43.632281   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:43.688307   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:43.688336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:43.703382   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:43.703407   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:43.782073   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:43.782093   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:43.782109   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:43.872811   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:43.872842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:46.420420   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:46.435110   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:46.435174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:46.474019   72712 cri.go:89] found id: ""
	I0425 20:06:46.474044   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.474054   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:46.474067   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:46.474125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:46.517053   72712 cri.go:89] found id: ""
	I0425 20:06:46.517078   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.517088   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:46.517096   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:46.517150   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:46.560934   72712 cri.go:89] found id: ""
	I0425 20:06:46.560963   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.560972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:46.560977   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:46.561030   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:46.605969   72712 cri.go:89] found id: ""
	I0425 20:06:46.605997   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.606007   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:46.606012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:46.606061   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:46.647025   72712 cri.go:89] found id: ""
	I0425 20:06:46.647049   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.647058   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:46.647063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:46.647118   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:46.686931   72712 cri.go:89] found id: ""
	I0425 20:06:46.686956   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.686966   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:46.686975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:46.687053   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:46.727183   72712 cri.go:89] found id: ""
	I0425 20:06:46.727207   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.727216   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:46.727224   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:46.727277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:46.768030   72712 cri.go:89] found id: ""
	I0425 20:06:46.768059   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.768073   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:46.768085   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:46.768105   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:46.823400   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:46.823439   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:46.838443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:46.838468   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:46.919509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:46.919527   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:46.919538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:46.996250   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:46.996284   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:44.873706   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.874042   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:45.918619   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.418507   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.384559   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.884184   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.885081   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:49.542696   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:49.557346   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:49.557444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:49.595195   72712 cri.go:89] found id: ""
	I0425 20:06:49.595220   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.595231   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:49.595238   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:49.595305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:49.641324   72712 cri.go:89] found id: ""
	I0425 20:06:49.641354   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.641365   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:49.641373   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:49.641426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:49.681510   72712 cri.go:89] found id: ""
	I0425 20:06:49.681540   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.681552   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:49.681559   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:49.681620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:49.721482   72712 cri.go:89] found id: ""
	I0425 20:06:49.721509   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.721518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:49.721525   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:49.721581   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:49.762682   72712 cri.go:89] found id: ""
	I0425 20:06:49.762710   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.762723   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:49.762731   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:49.762793   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:49.801892   72712 cri.go:89] found id: ""
	I0425 20:06:49.801920   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.801932   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:49.801943   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:49.802002   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:49.840347   72712 cri.go:89] found id: ""
	I0425 20:06:49.840376   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.840387   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:49.840395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:49.840458   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:49.898486   72712 cri.go:89] found id: ""
	I0425 20:06:49.898516   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.898527   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:49.898536   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:49.898547   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:49.952735   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:49.952775   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:49.967986   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:49.968018   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:50.048003   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:50.048024   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:50.048040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:50.126062   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:50.126098   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:49.373031   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:51.873671   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.917641   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.418642   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.421542   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.384273   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.384393   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:52.679721   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:52.695636   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:52.695700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:52.738329   72712 cri.go:89] found id: ""
	I0425 20:06:52.738359   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.738368   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:52.738374   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:52.738420   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:52.779388   72712 cri.go:89] found id: ""
	I0425 20:06:52.779418   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.779426   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:52.779433   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:52.779496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:52.821105   72712 cri.go:89] found id: ""
	I0425 20:06:52.821137   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.821149   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:52.821168   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:52.821231   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:52.861781   72712 cri.go:89] found id: ""
	I0425 20:06:52.861817   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.861825   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:52.861831   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:52.861885   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:52.904602   72712 cri.go:89] found id: ""
	I0425 20:06:52.904633   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.904644   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:52.904651   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:52.904712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:52.951137   72712 cri.go:89] found id: ""
	I0425 20:06:52.951174   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.951183   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:52.951188   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:52.951234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:52.994199   72712 cri.go:89] found id: ""
	I0425 20:06:52.994249   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.994257   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:52.994262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:52.994315   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:53.031997   72712 cri.go:89] found id: ""
	I0425 20:06:53.032020   72712 logs.go:276] 0 containers: []
	W0425 20:06:53.032027   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:53.032035   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:53.032046   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:53.111351   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:53.111383   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:53.162470   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:53.162504   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:53.217188   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:53.217223   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:53.233071   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:53.233100   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:53.308983   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:55.809162   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:55.825185   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:55.825259   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:55.865963   72712 cri.go:89] found id: ""
	I0425 20:06:55.865989   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.866001   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:55.866009   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:55.866060   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:55.920565   72712 cri.go:89] found id: ""
	I0425 20:06:55.920601   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.920612   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:55.920620   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:55.920677   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:55.962643   72712 cri.go:89] found id: ""
	I0425 20:06:55.962669   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.962677   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:55.962684   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:55.962738   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:56.000737   72712 cri.go:89] found id: ""
	I0425 20:06:56.000764   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.000773   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:56.000782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:56.000828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:56.042226   72712 cri.go:89] found id: ""
	I0425 20:06:56.042251   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.042259   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:56.042265   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:56.042316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:56.080765   72712 cri.go:89] found id: ""
	I0425 20:06:56.080788   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.080798   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:56.080810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:56.080869   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:56.119563   72712 cri.go:89] found id: ""
	I0425 20:06:56.119590   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.119602   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:56.119608   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:56.119667   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:56.160136   72712 cri.go:89] found id: ""
	I0425 20:06:56.160162   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.160170   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:56.160179   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:56.160193   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:56.213506   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:56.213539   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:56.232121   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:56.232150   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:56.336606   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:56.336629   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:56.336640   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:56.426867   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:56.426908   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:54.374441   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:56.374847   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.916077   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.916521   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.384779   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.884281   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:58.975395   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:58.991064   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:58.991125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:59.031157   72712 cri.go:89] found id: ""
	I0425 20:06:59.031179   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.031190   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:59.031197   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:59.031253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:59.071893   72712 cri.go:89] found id: ""
	I0425 20:06:59.071923   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.071931   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:59.071937   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:59.071998   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:59.114714   72712 cri.go:89] found id: ""
	I0425 20:06:59.114749   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.114760   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:59.114768   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:59.114840   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:59.159482   72712 cri.go:89] found id: ""
	I0425 20:06:59.159510   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.159518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:59.159523   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:59.159575   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:59.201218   72712 cri.go:89] found id: ""
	I0425 20:06:59.201245   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.201253   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:59.201263   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:59.201312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:59.247277   72712 cri.go:89] found id: ""
	I0425 20:06:59.247305   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.247316   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:59.247324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:59.247379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:59.286713   72712 cri.go:89] found id: ""
	I0425 20:06:59.286738   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.286746   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:59.286751   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:59.286804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:59.332263   72712 cri.go:89] found id: ""
	I0425 20:06:59.332296   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.332320   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:59.332332   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:59.332346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:59.416446   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:59.416477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:59.462125   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:59.462166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:59.514881   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:59.514907   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:59.530109   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:59.530134   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:59.605820   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.106478   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:02.124859   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:02.124934   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:02.180491   72712 cri.go:89] found id: ""
	I0425 20:07:02.180526   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.180537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:02.180545   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:02.180601   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:02.237075   72712 cri.go:89] found id: ""
	I0425 20:07:02.237104   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.237118   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:02.237126   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:02.237190   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:02.295104   72712 cri.go:89] found id: ""
	I0425 20:07:02.295129   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.295140   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:02.295148   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:02.295210   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:02.335392   72712 cri.go:89] found id: ""
	I0425 20:07:02.335418   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.335428   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:02.335435   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:02.335496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:58.871748   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.372545   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.373424   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.917135   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.917504   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.885744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:04.385280   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:02.376964   72712 cri.go:89] found id: ""
	I0425 20:07:02.376990   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.377002   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:02.377009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:02.377066   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:02.415460   72712 cri.go:89] found id: ""
	I0425 20:07:02.415484   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.415491   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:02.415496   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:02.415550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:02.461946   72712 cri.go:89] found id: ""
	I0425 20:07:02.461972   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.461993   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:02.462009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:02.462075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:02.502829   72712 cri.go:89] found id: ""
	I0425 20:07:02.502851   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.502858   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:02.502866   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:02.502878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:02.558264   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:02.558296   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:02.574175   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:02.574225   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:02.649363   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.649389   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:02.649404   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:02.730528   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:02.730560   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.276648   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:05.292055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:05.292121   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:05.332849   72712 cri.go:89] found id: ""
	I0425 20:07:05.332874   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.332884   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:05.332892   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:05.332954   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:05.376446   72712 cri.go:89] found id: ""
	I0425 20:07:05.376475   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.376487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:05.376494   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:05.376556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:05.418635   72712 cri.go:89] found id: ""
	I0425 20:07:05.418664   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.418675   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:05.418682   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:05.418745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:05.459082   72712 cri.go:89] found id: ""
	I0425 20:07:05.459113   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.459123   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:05.459128   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:05.459175   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:05.498473   72712 cri.go:89] found id: ""
	I0425 20:07:05.498502   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.498514   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:05.498521   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:05.498578   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:05.543121   72712 cri.go:89] found id: ""
	I0425 20:07:05.543150   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.543159   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:05.543164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:05.543211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:05.585722   72712 cri.go:89] found id: ""
	I0425 20:07:05.585748   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.585758   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:05.585766   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:05.585826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:05.629614   72712 cri.go:89] found id: ""
	I0425 20:07:05.629647   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.629661   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:05.629671   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:05.629685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:05.683974   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:05.684007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:05.700651   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:05.700685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:05.782097   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:05.782127   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:05.782142   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:05.863881   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:05.863918   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.374553   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:07.872114   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.417080   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.417436   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:10.418259   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.885509   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:09.383078   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.412898   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:08.428152   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:08.428206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:08.468403   72712 cri.go:89] found id: ""
	I0425 20:07:08.468441   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.468455   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:08.468464   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:08.468529   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:08.511246   72712 cri.go:89] found id: ""
	I0425 20:07:08.511285   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.511297   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:08.511304   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:08.511363   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:08.553121   72712 cri.go:89] found id: ""
	I0425 20:07:08.553148   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.553155   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:08.553161   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:08.553214   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:08.589723   72712 cri.go:89] found id: ""
	I0425 20:07:08.589745   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.589755   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:08.589762   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:08.589826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:08.629502   72712 cri.go:89] found id: ""
	I0425 20:07:08.629525   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.629533   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:08.629538   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:08.629591   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:08.677107   72712 cri.go:89] found id: ""
	I0425 20:07:08.677144   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.677153   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:08.677164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:08.677212   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:08.716501   72712 cri.go:89] found id: ""
	I0425 20:07:08.716531   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.716542   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:08.716550   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:08.716611   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:08.763473   72712 cri.go:89] found id: ""
	I0425 20:07:08.763503   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.763515   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:08.763526   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:08.763543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:08.848961   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:08.848985   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:08.849000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:08.945851   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:08.945890   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:08.989429   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:08.989460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:09.042721   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:09.042756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.559400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:11.575100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:11.575180   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:11.613246   72712 cri.go:89] found id: ""
	I0425 20:07:11.613271   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.613284   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:11.613290   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:11.613351   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:11.655158   72712 cri.go:89] found id: ""
	I0425 20:07:11.655189   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.655200   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:11.655208   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:11.655266   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:11.695122   72712 cri.go:89] found id: ""
	I0425 20:07:11.695144   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.695151   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:11.695156   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:11.695205   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:11.735578   72712 cri.go:89] found id: ""
	I0425 20:07:11.735604   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.735615   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:11.735621   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:11.735680   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:11.774750   72712 cri.go:89] found id: ""
	I0425 20:07:11.774785   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.774795   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:11.774803   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:11.774855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:11.814878   72712 cri.go:89] found id: ""
	I0425 20:07:11.814908   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.814920   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:11.814939   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:11.815000   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:11.853262   72712 cri.go:89] found id: ""
	I0425 20:07:11.853295   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.853306   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:11.853313   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:11.853379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:11.897291   72712 cri.go:89] found id: ""
	I0425 20:07:11.897314   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.897324   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:11.897333   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:11.897348   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:11.956913   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:11.956945   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.973787   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:11.973821   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:12.055801   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:12.055826   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:12.055842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:12.140238   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:12.140270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:10.372634   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.374037   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.418299   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.919967   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:11.383994   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:13.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:15.884319   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.685296   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:14.699655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:14.699740   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:14.741907   72712 cri.go:89] found id: ""
	I0425 20:07:14.741936   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.741947   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:14.741955   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:14.742017   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:14.786457   72712 cri.go:89] found id: ""
	I0425 20:07:14.786479   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.786487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:14.786493   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:14.786537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:14.825010   72712 cri.go:89] found id: ""
	I0425 20:07:14.825042   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.825055   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:14.825063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:14.825124   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:14.874834   72712 cri.go:89] found id: ""
	I0425 20:07:14.874856   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.874867   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:14.874875   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:14.874933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:14.914636   72712 cri.go:89] found id: ""
	I0425 20:07:14.914674   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.914685   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:14.914693   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:14.914752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:14.959327   72712 cri.go:89] found id: ""
	I0425 20:07:14.959356   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.959365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:14.959372   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:14.959425   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:15.000637   72712 cri.go:89] found id: ""
	I0425 20:07:15.000666   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.000674   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:15.000680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:15.000728   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:15.040497   72712 cri.go:89] found id: ""
	I0425 20:07:15.040523   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.040531   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:15.040539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:15.040550   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:15.120206   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:15.120240   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:15.168292   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:15.168324   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:15.222133   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:15.222164   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:15.237719   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:15.237746   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:15.323404   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:14.872743   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.375231   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.420149   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:19.420277   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:18.384902   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:20.883469   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.823552   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:17.838837   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:17.838911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:17.880547   72712 cri.go:89] found id: ""
	I0425 20:07:17.880584   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.880595   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:17.880608   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:17.880669   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:17.929700   72712 cri.go:89] found id: ""
	I0425 20:07:17.929730   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.929742   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:17.929797   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:17.929861   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:17.974057   72712 cri.go:89] found id: ""
	I0425 20:07:17.974081   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.974088   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:17.974094   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:17.974142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:18.013173   72712 cri.go:89] found id: ""
	I0425 20:07:18.013200   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.013209   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:18.013215   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:18.013267   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:18.053525   72712 cri.go:89] found id: ""
	I0425 20:07:18.053557   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.053568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:18.053580   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:18.053644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:18.095972   72712 cri.go:89] found id: ""
	I0425 20:07:18.096004   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.096016   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:18.096024   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:18.096089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:18.136792   72712 cri.go:89] found id: ""
	I0425 20:07:18.136823   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.136834   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:18.136842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:18.136904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:18.176562   72712 cri.go:89] found id: ""
	I0425 20:07:18.176594   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.176605   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:18.176619   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:18.176634   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:18.254402   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:18.254440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:18.298075   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:18.298112   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:18.356091   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:18.356124   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:18.373788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:18.373822   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:18.452545   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:20.952752   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:20.972054   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:20.972133   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:21.015572   72712 cri.go:89] found id: ""
	I0425 20:07:21.015602   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.015613   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:21.015621   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:21.015689   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:21.053313   72712 cri.go:89] found id: ""
	I0425 20:07:21.053342   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.053352   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:21.053359   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:21.053422   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:21.090343   72712 cri.go:89] found id: ""
	I0425 20:07:21.090373   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.090384   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:21.090391   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:21.090472   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:21.127148   72712 cri.go:89] found id: ""
	I0425 20:07:21.127174   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.127184   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:21.127192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:21.127258   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:21.167175   72712 cri.go:89] found id: ""
	I0425 20:07:21.167199   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.167207   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:21.167212   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:21.167263   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:21.212740   72712 cri.go:89] found id: ""
	I0425 20:07:21.212771   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.212783   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:21.212791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:21.212856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:21.250751   72712 cri.go:89] found id: ""
	I0425 20:07:21.250774   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.250782   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:21.250788   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:21.250833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:21.292387   72712 cri.go:89] found id: ""
	I0425 20:07:21.292414   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.292426   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:21.292436   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:21.292451   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:21.337695   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:21.337726   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:21.395479   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:21.395520   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:21.411538   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:21.411564   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:21.493248   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:21.493270   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:21.493282   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:19.873680   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.372461   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:21.421770   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:23.426808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.883520   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.884554   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.076755   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:24.093549   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:24.093624   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:24.135660   72712 cri.go:89] found id: ""
	I0425 20:07:24.135686   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.135694   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:24.135705   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:24.135784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:24.179778   72712 cri.go:89] found id: ""
	I0425 20:07:24.179799   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.179807   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:24.179824   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:24.179883   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.226745   72712 cri.go:89] found id: ""
	I0425 20:07:24.226771   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.226780   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:24.226785   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:24.226839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:24.273302   72712 cri.go:89] found id: ""
	I0425 20:07:24.273327   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.273347   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:24.273354   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:24.273421   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:24.314117   72712 cri.go:89] found id: ""
	I0425 20:07:24.314149   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.314160   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:24.314167   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:24.314247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:24.353144   72712 cri.go:89] found id: ""
	I0425 20:07:24.353173   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.353184   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:24.353192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:24.353292   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:24.395899   72712 cri.go:89] found id: ""
	I0425 20:07:24.395925   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.395933   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:24.395938   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:24.395988   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:24.444470   72712 cri.go:89] found id: ""
	I0425 20:07:24.444503   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.444514   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:24.444525   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:24.444540   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:24.499845   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:24.499876   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:24.517421   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:24.517449   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:24.596509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:24.596530   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:24.596543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:24.710844   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:24.710878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.259541   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:27.275551   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:27.275609   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:27.314610   72712 cri.go:89] found id: ""
	I0425 20:07:27.314640   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.314651   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:27.314656   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:27.314712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:27.350100   72712 cri.go:89] found id: ""
	I0425 20:07:27.350132   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.350151   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:27.350158   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:27.350226   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.373886   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:26.873863   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:25.917794   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:28.417757   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:30.419922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.384565   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:29.385043   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.390197   72712 cri.go:89] found id: ""
	I0425 20:07:27.390238   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.390249   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:27.390257   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:27.390312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:27.431936   72712 cri.go:89] found id: ""
	I0425 20:07:27.431961   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.431973   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:27.431980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:27.432038   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:27.469175   72712 cri.go:89] found id: ""
	I0425 20:07:27.469204   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.469212   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:27.469218   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:27.469276   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:27.509385   72712 cri.go:89] found id: ""
	I0425 20:07:27.509416   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.509428   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:27.509436   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:27.509503   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:27.548997   72712 cri.go:89] found id: ""
	I0425 20:07:27.549034   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.549045   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:27.549052   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:27.549111   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:27.588925   72712 cri.go:89] found id: ""
	I0425 20:07:27.588959   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.588973   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:27.588985   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:27.589000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.635005   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:27.635040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:27.686587   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:27.686617   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:27.702913   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:27.702942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:27.775525   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:27.775551   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:27.775562   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.352358   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:30.367016   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:30.367088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:30.410878   72712 cri.go:89] found id: ""
	I0425 20:07:30.410906   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.410917   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:30.410927   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:30.410985   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:30.456150   72712 cri.go:89] found id: ""
	I0425 20:07:30.456173   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.456181   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:30.456186   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:30.456234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:30.495409   72712 cri.go:89] found id: ""
	I0425 20:07:30.495439   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.495450   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:30.495458   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:30.495516   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:30.535863   72712 cri.go:89] found id: ""
	I0425 20:07:30.535895   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.535906   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:30.535912   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:30.535971   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:30.573772   72712 cri.go:89] found id: ""
	I0425 20:07:30.573808   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.573819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:30.573826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:30.573892   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:30.626310   72712 cri.go:89] found id: ""
	I0425 20:07:30.626350   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.626362   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:30.626376   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:30.626438   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:30.666302   72712 cri.go:89] found id: ""
	I0425 20:07:30.666332   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.666343   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:30.666350   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:30.666413   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:30.703478   72712 cri.go:89] found id: ""
	I0425 20:07:30.703507   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.703519   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:30.703529   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:30.703543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:30.756532   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:30.756566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:30.772128   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:30.772158   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:30.853701   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:30.853728   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:30.853743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.935879   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:30.935917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:29.372219   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.872125   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:32.865998   72220 pod_ready.go:81] duration metric: took 4m0.000690329s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:32.866038   72220 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0425 20:07:32.866057   72220 pod_ready.go:38] duration metric: took 4m13.047288103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:32.866091   72220 kubeadm.go:591] duration metric: took 4m22.882679222s to restartPrimaryControlPlane
	W0425 20:07:32.866150   72220 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:32.866182   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:32.917319   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:35.421922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.886418   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.894776   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.483702   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:33.498238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:33.498310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:33.545696   72712 cri.go:89] found id: ""
	I0425 20:07:33.545723   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.545731   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:33.545737   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:33.545791   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:33.590808   72712 cri.go:89] found id: ""
	I0425 20:07:33.590837   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.590849   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:33.590857   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:33.590919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:33.634529   72712 cri.go:89] found id: ""
	I0425 20:07:33.634554   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.634562   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:33.634572   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:33.634640   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:33.679055   72712 cri.go:89] found id: ""
	I0425 20:07:33.679082   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.679093   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:33.679100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:33.679160   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:33.720653   72712 cri.go:89] found id: ""
	I0425 20:07:33.720686   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.720698   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:33.720706   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:33.720777   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:33.766163   72712 cri.go:89] found id: ""
	I0425 20:07:33.766221   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.766233   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:33.766241   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:33.766314   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:33.810804   72712 cri.go:89] found id: ""
	I0425 20:07:33.810830   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.810839   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:33.810844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:33.810908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:33.858109   72712 cri.go:89] found id: ""
	I0425 20:07:33.858140   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.858152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:33.858162   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:33.858176   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:33.926296   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:33.926333   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:33.944220   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:33.944249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:34.042119   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:34.042191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:34.042234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:34.143694   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:34.143732   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:36.691575   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:36.710408   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:36.710490   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:36.760097   72712 cri.go:89] found id: ""
	I0425 20:07:36.760135   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.760144   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:36.760150   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:36.760208   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:36.801508   72712 cri.go:89] found id: ""
	I0425 20:07:36.801532   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.801541   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:36.801546   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:36.801602   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:36.842293   72712 cri.go:89] found id: ""
	I0425 20:07:36.842328   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.842340   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:36.842355   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:36.842418   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:36.884101   72712 cri.go:89] found id: ""
	I0425 20:07:36.884131   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.884141   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:36.884149   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:36.884211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:36.925007   72712 cri.go:89] found id: ""
	I0425 20:07:36.925032   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.925039   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:36.925045   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:36.925109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:36.964975   72712 cri.go:89] found id: ""
	I0425 20:07:36.965009   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.965020   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:36.965028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:36.965088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:37.030956   72712 cri.go:89] found id: ""
	I0425 20:07:37.030987   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.030999   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:37.031007   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:37.031080   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:37.105919   72712 cri.go:89] found id: ""
	I0425 20:07:37.105946   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.105956   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:37.105967   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:37.105983   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:37.196376   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:37.196415   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:37.240296   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:37.240334   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:37.304336   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:37.304371   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:37.323146   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:37.323184   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:37.918245   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.418671   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:36.384384   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:38.387656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.883973   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	W0425 20:07:37.414563   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:39.915087   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:39.930987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:39.931068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:39.967641   72712 cri.go:89] found id: ""
	I0425 20:07:39.967682   72712 logs.go:276] 0 containers: []
	W0425 20:07:39.967693   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:39.967698   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:39.967755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:40.009924   72712 cri.go:89] found id: ""
	I0425 20:07:40.009951   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.009959   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:40.009969   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:40.010019   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:40.049644   72712 cri.go:89] found id: ""
	I0425 20:07:40.049675   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.049689   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:40.049697   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:40.049759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:40.090487   72712 cri.go:89] found id: ""
	I0425 20:07:40.090509   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.090519   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:40.090524   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:40.090583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:40.137634   72712 cri.go:89] found id: ""
	I0425 20:07:40.137664   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.137674   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:40.137681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:40.137745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:40.174832   72712 cri.go:89] found id: ""
	I0425 20:07:40.174863   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.174874   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:40.174882   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:40.174947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:40.212559   72712 cri.go:89] found id: ""
	I0425 20:07:40.212585   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.212593   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:40.212598   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:40.212687   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:40.253459   72712 cri.go:89] found id: ""
	I0425 20:07:40.253494   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.253506   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:40.253518   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:40.253533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:40.311253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:40.311288   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:40.326693   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:40.326722   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:40.405792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:40.405816   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:40.405831   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:40.486712   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:40.486749   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:42.419025   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:44.916387   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:41.387375   72304 pod_ready.go:81] duration metric: took 4m0.010411263s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:41.387396   72304 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:07:41.387402   72304 pod_ready.go:38] duration metric: took 4m6.083068398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:41.387414   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:07:41.387441   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:41.387498   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:41.459873   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:41.459899   72304 cri.go:89] found id: ""
	I0425 20:07:41.459907   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:41.459960   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.465470   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:41.465534   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:41.509504   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:41.509523   72304 cri.go:89] found id: ""
	I0425 20:07:41.509530   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:41.509584   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.515012   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:41.515070   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:41.562701   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:41.562727   72304 cri.go:89] found id: ""
	I0425 20:07:41.562737   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:41.562792   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.567856   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:41.567928   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:41.618411   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:41.618441   72304 cri.go:89] found id: ""
	I0425 20:07:41.618452   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:41.618510   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.625757   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:41.625826   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:41.672707   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:41.672734   72304 cri.go:89] found id: ""
	I0425 20:07:41.672741   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:41.672785   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.678040   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:41.678119   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:41.725172   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:41.725196   72304 cri.go:89] found id: ""
	I0425 20:07:41.725205   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:41.725264   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.730651   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:41.730718   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:41.777224   72304 cri.go:89] found id: ""
	I0425 20:07:41.777269   72304 logs.go:276] 0 containers: []
	W0425 20:07:41.777280   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:41.777290   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:41.777380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:41.821498   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:41.821524   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:41.821531   72304 cri.go:89] found id: ""
	I0425 20:07:41.821541   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:41.821599   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.827065   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.831900   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:41.831924   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:41.893198   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:41.893233   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:41.909141   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:41.909169   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:42.051260   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:42.051305   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:42.109173   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:42.109214   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:42.155862   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:42.155894   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:42.222430   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:42.222466   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:42.265323   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:42.265353   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:42.316534   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:42.316569   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:42.363543   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:42.363568   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:42.422389   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:42.422421   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:42.471230   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:42.471259   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.011223   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.011263   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:45.578411   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:45.597748   72304 api_server.go:72] duration metric: took 4m16.066757074s to wait for apiserver process to appear ...
	I0425 20:07:45.597777   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:07:45.597813   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:45.597861   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:45.649452   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:45.649481   72304 cri.go:89] found id: ""
	I0425 20:07:45.649491   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:45.649534   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.654965   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:45.655023   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:45.701151   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:45.701177   72304 cri.go:89] found id: ""
	I0425 20:07:45.701186   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:45.701238   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.706702   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:45.706767   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:45.763142   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:45.763167   72304 cri.go:89] found id: ""
	I0425 20:07:45.763177   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:45.763220   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.768626   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:45.768684   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:45.816615   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:45.816648   72304 cri.go:89] found id: ""
	I0425 20:07:45.816656   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:45.816701   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.822714   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:45.822790   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:45.875652   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:45.875678   72304 cri.go:89] found id: ""
	I0425 20:07:45.875688   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:45.875737   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.881649   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:45.881719   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:45.930631   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:45.930656   72304 cri.go:89] found id: ""
	I0425 20:07:45.930666   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:45.930721   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.939712   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:45.939783   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:45.984646   72304 cri.go:89] found id: ""
	I0425 20:07:45.984684   72304 logs.go:276] 0 containers: []
	W0425 20:07:45.984693   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:45.984699   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:45.984754   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:46.029752   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.029777   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.029782   72304 cri.go:89] found id: ""
	I0425 20:07:46.029789   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:46.029845   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.035189   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.040479   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:46.040503   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:46.101469   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:46.101509   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:46.167362   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:46.167401   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:46.217732   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:46.217759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:46.264372   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:46.264404   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:43.037730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:43.064471   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:43.064550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:43.130075   72712 cri.go:89] found id: ""
	I0425 20:07:43.130111   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.130129   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:43.130136   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:43.130195   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:43.169628   72712 cri.go:89] found id: ""
	I0425 20:07:43.169663   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.169675   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:43.169682   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:43.169748   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:43.214845   72712 cri.go:89] found id: ""
	I0425 20:07:43.214869   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.214877   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:43.214883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:43.214929   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:43.263047   72712 cri.go:89] found id: ""
	I0425 20:07:43.263069   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.263078   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:43.263083   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:43.263142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:43.313179   72712 cri.go:89] found id: ""
	I0425 20:07:43.313213   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.313223   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:43.313231   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:43.313295   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:43.353440   72712 cri.go:89] found id: ""
	I0425 20:07:43.353468   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.353480   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:43.353488   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:43.353546   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:43.392261   72712 cri.go:89] found id: ""
	I0425 20:07:43.392288   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.392296   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:43.392321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:43.392378   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:43.431111   72712 cri.go:89] found id: ""
	I0425 20:07:43.431139   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.431147   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:43.431155   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:43.431165   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:43.485087   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:43.485120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:43.501508   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:43.501536   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:43.586041   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:43.586073   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:43.586089   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.663194   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.663232   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:46.218461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:46.233195   72712 kubeadm.go:591] duration metric: took 4m4.06065248s to restartPrimaryControlPlane
	W0425 20:07:46.233281   72712 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:46.233311   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:48.166680   72712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.933342568s)
	I0425 20:07:48.166771   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:48.185391   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:07:48.198250   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:07:48.209825   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:07:48.209843   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:07:48.209897   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:07:48.220854   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:07:48.220909   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:07:48.231518   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:07:48.241515   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:07:48.241589   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:07:48.251764   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.261762   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:07:48.261813   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.271952   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:07:48.281914   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:07:48.281986   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:07:48.292879   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:07:48.372322   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:07:48.372460   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:07:48.529730   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:07:48.529854   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:07:48.529979   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:07:48.753171   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:07:48.755473   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:07:48.755590   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:07:48.755692   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:07:48.755809   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:07:48.755905   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:07:48.756132   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:07:48.756317   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:07:48.756867   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:07:48.757498   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:07:48.758073   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:07:48.758581   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:07:48.758745   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:07:48.758842   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:07:48.894873   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:07:48.946907   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:07:49.084938   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:07:49.201925   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:07:49.219675   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:07:49.220891   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:07:49.220951   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:07:49.387310   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:07:46.917886   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:48.919793   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:46.324627   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:46.324653   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:46.382068   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:46.382102   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.424672   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:46.424709   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.466659   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:46.466692   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:46.484868   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:46.484898   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:46.614688   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:46.614720   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:46.666805   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:46.666846   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:47.098854   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:47.098899   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:49.653042   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:07:49.657843   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:07:49.659251   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:07:49.659285   72304 api_server.go:131] duration metric: took 4.061499319s to wait for apiserver health ...
	I0425 20:07:49.659295   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:07:49.659321   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:49.659380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:49.709699   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:49.709721   72304 cri.go:89] found id: ""
	I0425 20:07:49.709729   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:49.709795   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.715369   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:49.715429   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:49.773517   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:49.773544   72304 cri.go:89] found id: ""
	I0425 20:07:49.773554   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:49.773617   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.778984   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:49.779071   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:49.825707   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:49.825739   72304 cri.go:89] found id: ""
	I0425 20:07:49.825746   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:49.825790   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.830613   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:49.830678   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:49.872068   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:49.872094   72304 cri.go:89] found id: ""
	I0425 20:07:49.872104   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:49.872166   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.877311   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:49.877383   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:49.930182   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:49.930216   72304 cri.go:89] found id: ""
	I0425 20:07:49.930228   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:49.930283   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.935415   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:49.935484   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:49.985377   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:49.985404   72304 cri.go:89] found id: ""
	I0425 20:07:49.985412   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:49.985469   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.991021   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:49.991092   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:50.037755   72304 cri.go:89] found id: ""
	I0425 20:07:50.037787   72304 logs.go:276] 0 containers: []
	W0425 20:07:50.037802   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:50.037811   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:50.037875   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:50.083706   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.083731   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.083735   72304 cri.go:89] found id: ""
	I0425 20:07:50.083742   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:50.083793   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.088730   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.094339   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:50.094371   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:50.161538   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:50.161573   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.204178   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:50.204211   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.251315   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:50.251344   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:50.315859   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:50.315886   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:50.367787   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:50.367829   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:50.429509   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:50.429541   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:50.488723   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:50.488759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:50.506838   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:50.506879   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:50.629496   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:50.629526   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:50.689286   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:50.689321   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:50.731343   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:50.731373   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:50.772085   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:50.772114   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:49.389887   72712 out.go:204]   - Booting up control plane ...
	I0425 20:07:49.390011   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:07:49.395060   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:07:49.398108   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:07:49.398220   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:07:49.402596   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:07:53.651817   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:07:53.651845   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.651850   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.651854   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.651859   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.651862   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.651865   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.651872   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.651878   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.651885   72304 system_pods.go:74] duration metric: took 3.992584481s to wait for pod list to return data ...
	I0425 20:07:53.651892   72304 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:07:53.654617   72304 default_sa.go:45] found service account: "default"
	I0425 20:07:53.654641   72304 default_sa.go:55] duration metric: took 2.742232ms for default service account to be created ...
	I0425 20:07:53.654649   72304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:07:53.660082   72304 system_pods.go:86] 8 kube-system pods found
	I0425 20:07:53.660110   72304 system_pods.go:89] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.660116   72304 system_pods.go:89] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.660121   72304 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.660127   72304 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.660131   72304 system_pods.go:89] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.660135   72304 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.660142   72304 system_pods.go:89] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.660148   72304 system_pods.go:89] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.660154   72304 system_pods.go:126] duration metric: took 5.50043ms to wait for k8s-apps to be running ...
	I0425 20:07:53.660161   72304 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:07:53.660201   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:53.677461   72304 system_svc.go:56] duration metric: took 17.289854ms WaitForService to wait for kubelet
	I0425 20:07:53.677499   72304 kubeadm.go:576] duration metric: took 4m24.146512306s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:07:53.677524   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:07:53.681527   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:07:53.681562   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:07:53.681576   72304 node_conditions.go:105] duration metric: took 4.045221ms to run NodePressure ...
	I0425 20:07:53.681591   72304 start.go:240] waiting for startup goroutines ...
	I0425 20:07:53.681605   72304 start.go:245] waiting for cluster config update ...
	I0425 20:07:53.681622   72304 start.go:254] writing updated cluster config ...
	I0425 20:07:53.682002   72304 ssh_runner.go:195] Run: rm -f paused
	I0425 20:07:53.732056   72304 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:07:53.734302   72304 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142196" cluster and "default" namespace by default
	I0425 20:07:51.419808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:53.916090   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:55.917139   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:58.417609   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:00.917152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:02.918628   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.419508   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.765908   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.899694836s)
	I0425 20:08:05.765989   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:05.787711   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:08:05.801717   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:08:05.813710   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:08:05.813741   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:08:05.813802   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:08:05.825122   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:08:05.825202   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:08:05.837118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:08:05.848807   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:08:05.848880   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:08:05.862028   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.873795   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:08:05.873919   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.885577   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:08:05.897605   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:08:05.897685   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:08:05.909284   72220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:08:05.965574   72220 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 20:08:05.965663   72220 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:08:06.133359   72220 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:08:06.133525   72220 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:08:06.133675   72220 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:08:06.391437   72220 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:08:06.393805   72220 out.go:204]   - Generating certificates and keys ...
	I0425 20:08:06.393905   72220 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:08:06.393994   72220 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:08:06.394121   72220 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:08:06.394237   72220 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:08:06.394332   72220 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:08:06.394417   72220 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:08:06.394514   72220 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:08:06.396093   72220 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:08:06.396202   72220 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:08:06.396300   72220 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:08:06.396358   72220 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:08:06.396423   72220 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:08:06.683452   72220 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:08:06.778456   72220 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 20:08:06.923709   72220 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:08:07.079685   72220 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:08:07.170533   72220 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:08:07.171070   72220 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:08:07.173798   72220 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:08:07.175699   72220 out.go:204]   - Booting up control plane ...
	I0425 20:08:07.175824   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:08:07.175924   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:08:07.176060   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:08:07.197685   72220 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:08:07.200579   72220 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:08:07.200645   72220 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:08:07.354665   72220 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 20:08:07.354779   72220 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 20:08:07.855900   72220 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.56346ms
	I0425 20:08:07.856015   72220 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 20:08:07.423114   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:09.425115   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:13.358654   72220 kubeadm.go:309] [api-check] The API server is healthy after 5.502458238s
	I0425 20:08:13.388381   72220 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 20:08:13.908867   72220 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 20:08:13.945417   72220 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 20:08:13.945708   72220 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-744552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 20:08:13.959901   72220 kubeadm.go:309] [bootstrap-token] Using token: r2mxoe.iuelddsr8gvoq1wo
	I0425 20:08:13.961409   72220 out.go:204]   - Configuring RBAC rules ...
	I0425 20:08:13.961552   72220 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 20:08:13.970435   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 20:08:13.978933   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 20:08:13.982503   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 20:08:13.987029   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 20:08:13.990969   72220 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 20:08:14.103051   72220 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 20:08:14.554715   72220 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 20:08:15.105951   72220 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 20:08:15.107134   72220 kubeadm.go:309] 
	I0425 20:08:15.107222   72220 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 20:08:15.107236   72220 kubeadm.go:309] 
	I0425 20:08:15.107336   72220 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 20:08:15.107349   72220 kubeadm.go:309] 
	I0425 20:08:15.107379   72220 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 20:08:15.107463   72220 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 20:08:15.107550   72220 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 20:08:15.107560   72220 kubeadm.go:309] 
	I0425 20:08:15.107657   72220 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 20:08:15.107668   72220 kubeadm.go:309] 
	I0425 20:08:15.107735   72220 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 20:08:15.107747   72220 kubeadm.go:309] 
	I0425 20:08:15.107807   72220 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 20:08:15.107935   72220 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 20:08:15.108030   72220 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 20:08:15.108042   72220 kubeadm.go:309] 
	I0425 20:08:15.108154   72220 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 20:08:15.108269   72220 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 20:08:15.108280   72220 kubeadm.go:309] 
	I0425 20:08:15.108395   72220 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.108556   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
	I0425 20:08:15.108594   72220 kubeadm.go:309] 	--control-plane 
	I0425 20:08:15.108603   72220 kubeadm.go:309] 
	I0425 20:08:15.108719   72220 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 20:08:15.108730   72220 kubeadm.go:309] 
	I0425 20:08:15.108849   72220 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.109004   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed 
	I0425 20:08:15.109717   72220 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:08:15.109778   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:08:15.109797   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:08:15.111712   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:08:11.918414   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:14.420753   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:15.113288   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:08:15.129693   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:08:15.157631   72220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:08:15.157709   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.157760   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-744552 minikube.k8s.io/updated_at=2024_04_25T20_08_15_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=no-preload-744552 minikube.k8s.io/primary=true
	I0425 20:08:15.374198   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.418592   72220 ops.go:34] apiserver oom_adj: -16
	I0425 20:08:15.874721   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.374969   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.875091   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.375038   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.874685   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:18.374802   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.917617   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:19.421721   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:18.874931   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.374961   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.874349   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.374787   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.875130   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.374959   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.874325   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.374798   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.875034   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:23.374899   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.917898   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:22.917132   71966 pod_ready.go:81] duration metric: took 4m0.007062693s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:08:22.917156   71966 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:08:22.917164   71966 pod_ready.go:38] duration metric: took 4m4.548150095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:22.917179   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:22.917211   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:22.917270   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:22.982604   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:22.982631   71966 cri.go:89] found id: ""
	I0425 20:08:22.982640   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:22.982698   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:22.988558   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:22.988618   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:23.031937   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.031964   71966 cri.go:89] found id: ""
	I0425 20:08:23.031973   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:23.032031   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.037315   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:23.037371   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:23.089839   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.089862   71966 cri.go:89] found id: ""
	I0425 20:08:23.089872   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:23.089936   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.095247   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:23.095309   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:23.136257   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.136286   71966 cri.go:89] found id: ""
	I0425 20:08:23.136294   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:23.136357   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.142548   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:23.142608   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:23.186190   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.186229   71966 cri.go:89] found id: ""
	I0425 20:08:23.186239   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:23.186301   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.191422   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:23.191494   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:23.242326   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.242361   71966 cri.go:89] found id: ""
	I0425 20:08:23.242371   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:23.242437   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.248578   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:23.248642   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:23.286781   71966 cri.go:89] found id: ""
	I0425 20:08:23.286807   71966 logs.go:276] 0 containers: []
	W0425 20:08:23.286817   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:23.286823   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:23.286885   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:23.334728   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:23.334754   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.334761   71966 cri.go:89] found id: ""
	I0425 20:08:23.334770   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:23.334831   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.340288   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.344787   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:23.344808   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:23.401830   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:23.401865   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:23.425683   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:23.425715   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:23.568527   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:23.568558   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.608747   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:23.608776   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.647962   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:23.647996   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.687270   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:23.687308   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:23.745081   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:23.745112   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:23.799375   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:23.799405   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.853199   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:23.853232   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.896535   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:23.896571   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.964317   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:23.964350   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:24.013196   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:24.013231   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:23.874275   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.374250   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.874396   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.374767   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.874968   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.374333   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.874916   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.374369   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.499044   72220 kubeadm.go:1107] duration metric: took 12.341393953s to wait for elevateKubeSystemPrivileges
	W0425 20:08:27.499078   72220 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 20:08:27.499087   72220 kubeadm.go:393] duration metric: took 5m17.572541498s to StartCluster
	I0425 20:08:27.499108   72220 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.499189   72220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:08:27.500940   72220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.501192   72220 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:08:27.503257   72220 out.go:177] * Verifying Kubernetes components...
	I0425 20:08:27.501308   72220 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:08:27.501405   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:08:27.505389   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:08:27.505403   72220 addons.go:69] Setting storage-provisioner=true in profile "no-preload-744552"
	I0425 20:08:27.505438   72220 addons.go:234] Setting addon storage-provisioner=true in "no-preload-744552"
	W0425 20:08:27.505453   72220 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:08:27.505490   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505505   72220 addons.go:69] Setting metrics-server=true in profile "no-preload-744552"
	I0425 20:08:27.505535   72220 addons.go:234] Setting addon metrics-server=true in "no-preload-744552"
	W0425 20:08:27.505546   72220 addons.go:243] addon metrics-server should already be in state true
	I0425 20:08:27.505574   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505895   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.505922   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.505492   72220 addons.go:69] Setting default-storageclass=true in profile "no-preload-744552"
	I0425 20:08:27.505990   72220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-744552"
	I0425 20:08:27.505952   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506099   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.506418   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506467   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.523666   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0425 20:08:27.526950   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0425 20:08:27.526972   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.526981   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I0425 20:08:27.527536   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527606   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527662   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.527683   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528039   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528059   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528122   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528228   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528242   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528601   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528644   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528712   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.528735   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.528800   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.529228   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.529246   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.532151   72220 addons.go:234] Setting addon default-storageclass=true in "no-preload-744552"
	W0425 20:08:27.532171   72220 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:08:27.532204   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.532543   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.532582   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.547165   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0425 20:08:27.547700   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.548354   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.548368   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.548675   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.548793   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.550640   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.554301   72220 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:08:27.553061   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0425 20:08:27.553099   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0425 20:08:27.555613   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:08:27.555630   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:08:27.555652   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.556177   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556181   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556724   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556739   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.556868   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556879   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.557128   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.557700   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.557729   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.558142   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.558406   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.559420   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.559990   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.560057   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.560076   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.560177   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.560333   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.560549   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.560967   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.562839   72220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:08:27.564442   72220 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.564480   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:08:27.564517   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.567912   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.568153   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.568171   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.570321   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.570514   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.570709   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.570945   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.578396   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
	I0425 20:08:27.586629   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.587070   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.587082   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.587584   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.587736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.589708   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.589937   72220 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.589948   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:08:27.589961   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.592640   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.592983   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.593007   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.593261   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.593541   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.593736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.593906   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.783858   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:08:27.820917   72220 node_ready.go:35] waiting up to 6m0s for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832349   72220 node_ready.go:49] node "no-preload-744552" has status "Ready":"True"
	I0425 20:08:27.832377   72220 node_ready.go:38] duration metric: took 11.423909ms for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832390   72220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:27.844475   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:27.886461   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:08:27.886483   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:08:27.899413   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.931511   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.935073   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:08:27.935098   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:08:27.989052   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:27.989082   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:08:28.016326   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:28.551863   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551894   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.551964   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551976   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552255   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552280   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552292   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552315   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552358   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.552397   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552405   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552414   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552421   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552571   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552597   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552710   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552736   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.578416   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.578445   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.578730   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.578776   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.578789   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.945831   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.945861   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946170   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946191   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946214   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.946224   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946531   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946549   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946560   72220 addons.go:470] Verifying addon metrics-server=true in "no-preload-744552"
	I0425 20:08:28.946570   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.948485   72220 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0425 20:08:27.005360   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:27.024856   71966 api_server.go:72] duration metric: took 4m14.401244231s to wait for apiserver process to appear ...
	I0425 20:08:27.024881   71966 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:08:27.024922   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:27.024982   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:27.072098   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:27.072129   71966 cri.go:89] found id: ""
	I0425 20:08:27.072140   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:27.072210   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.077726   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:27.077793   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:27.118834   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:27.118855   71966 cri.go:89] found id: ""
	I0425 20:08:27.118864   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:27.118917   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.125277   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:27.125347   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:27.167036   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.167064   71966 cri.go:89] found id: ""
	I0425 20:08:27.167074   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:27.167131   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.172390   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:27.172468   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:27.212933   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:27.212957   71966 cri.go:89] found id: ""
	I0425 20:08:27.212967   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:27.213022   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.218033   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:27.218083   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:27.259294   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:27.259321   71966 cri.go:89] found id: ""
	I0425 20:08:27.259331   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:27.259384   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.265537   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:27.265610   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:27.312145   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:27.312174   71966 cri.go:89] found id: ""
	I0425 20:08:27.312183   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:27.312240   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.318346   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:27.318405   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:27.362467   71966 cri.go:89] found id: ""
	I0425 20:08:27.362495   71966 logs.go:276] 0 containers: []
	W0425 20:08:27.362504   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:27.362509   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:27.362569   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:27.406810   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:27.406834   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.406839   71966 cri.go:89] found id: ""
	I0425 20:08:27.406846   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:27.406903   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.412431   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.421695   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:27.421725   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.472832   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:27.472863   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.535799   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:27.535830   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:28.004964   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:28.005006   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:28.072378   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:28.072417   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:28.236479   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:28.236523   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:28.296095   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:28.296133   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:28.351290   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:28.351314   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:28.400529   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:28.400567   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:28.459149   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:28.459178   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:28.507818   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:28.507844   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:28.565596   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:28.565627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:28.588509   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:28.588535   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:29.403321   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:08:29.403717   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:29.404001   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:28.950127   72220 addons.go:505] duration metric: took 1.448816058s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0425 20:08:29.862142   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:30.851653   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.851677   72220 pod_ready.go:81] duration metric: took 3.007171918s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.851689   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857090   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.857108   72220 pod_ready.go:81] duration metric: took 5.412841ms for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857117   72220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863315   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.863331   72220 pod_ready.go:81] duration metric: took 6.207835ms for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863339   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867557   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.867579   72220 pod_ready.go:81] duration metric: took 4.23311ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867590   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872391   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.872407   72220 pod_ready.go:81] duration metric: took 4.810397ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872415   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249226   72220 pod_ready.go:92] pod "kube-proxy-22w7x" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.249259   72220 pod_ready.go:81] duration metric: took 376.837327ms for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249284   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649908   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.649934   72220 pod_ready.go:81] duration metric: took 400.641991ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649945   72220 pod_ready.go:38] duration metric: took 3.817541056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:31.649962   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:31.650025   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:31.684094   72220 api_server.go:72] duration metric: took 4.182865357s to wait for apiserver process to appear ...
	I0425 20:08:31.684123   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:08:31.684146   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:08:31.689688   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:08:31.690939   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.690963   72220 api_server.go:131] duration metric: took 6.831773ms to wait for apiserver health ...
	I0425 20:08:31.690973   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.853816   72220 system_pods.go:59] 9 kube-system pods found
	I0425 20:08:31.853849   72220 system_pods.go:61] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:31.853856   72220 system_pods.go:61] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:31.853861   72220 system_pods.go:61] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:31.853868   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:31.853872   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:31.853877   72220 system_pods.go:61] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:31.853881   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:31.853889   72220 system_pods.go:61] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:31.853894   72220 system_pods.go:61] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:31.853907   72220 system_pods.go:74] duration metric: took 162.928561ms to wait for pod list to return data ...
	I0425 20:08:31.853916   72220 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:32.049906   72220 default_sa.go:45] found service account: "default"
	I0425 20:08:32.049932   72220 default_sa.go:55] duration metric: took 196.003422ms for default service account to be created ...
	I0425 20:08:32.049942   72220 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:32.255245   72220 system_pods.go:86] 9 kube-system pods found
	I0425 20:08:32.255290   72220 system_pods.go:89] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:32.255298   72220 system_pods.go:89] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:32.255304   72220 system_pods.go:89] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:32.255311   72220 system_pods.go:89] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:32.255317   72220 system_pods.go:89] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:32.255322   72220 system_pods.go:89] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:32.255328   72220 system_pods.go:89] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:32.255338   72220 system_pods.go:89] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:32.255348   72220 system_pods.go:89] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:32.255368   72220 system_pods.go:126] duration metric: took 205.41905ms to wait for k8s-apps to be running ...
	I0425 20:08:32.255378   72220 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:32.255429   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:32.274141   72220 system_svc.go:56] duration metric: took 18.75721ms WaitForService to wait for kubelet
	I0425 20:08:32.274173   72220 kubeadm.go:576] duration metric: took 4.77294686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:32.274198   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:32.449699   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:32.449727   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:32.449741   72220 node_conditions.go:105] duration metric: took 175.536406ms to run NodePressure ...
	I0425 20:08:32.449755   72220 start.go:240] waiting for startup goroutines ...
	I0425 20:08:32.449765   72220 start.go:245] waiting for cluster config update ...
	I0425 20:08:32.449778   72220 start.go:254] writing updated cluster config ...
	I0425 20:08:32.450108   72220 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:32.503317   72220 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:32.505391   72220 out.go:177] * Done! kubectl is now configured to use "no-preload-744552" cluster and "default" namespace by default
	I0425 20:08:31.153636   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:08:31.158526   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:08:31.159775   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.159817   71966 api_server.go:131] duration metric: took 4.134911832s to wait for apiserver health ...
	I0425 20:08:31.159827   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.159847   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:31.159890   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:31.201597   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:31.201616   71966 cri.go:89] found id: ""
	I0425 20:08:31.201625   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:31.201667   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.206973   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:31.207039   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:31.248400   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:31.248424   71966 cri.go:89] found id: ""
	I0425 20:08:31.248435   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:31.248496   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.253822   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:31.253879   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:31.298921   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:31.298946   71966 cri.go:89] found id: ""
	I0425 20:08:31.298956   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:31.299003   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.304691   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:31.304758   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:31.351773   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:31.351796   71966 cri.go:89] found id: ""
	I0425 20:08:31.351804   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:31.351851   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.356599   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:31.356651   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:31.399655   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:31.399678   71966 cri.go:89] found id: ""
	I0425 20:08:31.399686   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:31.399740   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.405103   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:31.405154   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:31.452763   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:31.452785   71966 cri.go:89] found id: ""
	I0425 20:08:31.452794   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:31.452840   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.457788   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:31.457838   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:31.503746   71966 cri.go:89] found id: ""
	I0425 20:08:31.503780   71966 logs.go:276] 0 containers: []
	W0425 20:08:31.503791   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:31.503798   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:31.503868   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:31.548517   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:31.548543   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:31.548555   71966 cri.go:89] found id: ""
	I0425 20:08:31.548565   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:31.548631   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.553673   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.558271   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:31.558290   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:31.974349   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:31.974387   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:32.033292   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:32.033327   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:32.050762   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:32.050791   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:32.101591   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:32.101627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:32.142626   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:32.142652   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:32.203270   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:32.203315   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:32.247021   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:32.247048   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:32.294900   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:32.294936   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:32.353902   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:32.353934   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:32.488543   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:32.488584   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:32.569303   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:32.569358   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:32.622767   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:32.622802   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:35.181779   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:08:35.181813   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.181820   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.181826   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.181832   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.181837   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.181843   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.181851   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.181858   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.181867   71966 system_pods.go:74] duration metric: took 4.022033823s to wait for pod list to return data ...
	I0425 20:08:35.181879   71966 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:35.185387   71966 default_sa.go:45] found service account: "default"
	I0425 20:08:35.185413   71966 default_sa.go:55] duration metric: took 3.523751ms for default service account to be created ...
	I0425 20:08:35.185423   71966 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:35.195075   71966 system_pods.go:86] 8 kube-system pods found
	I0425 20:08:35.195099   71966 system_pods.go:89] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.195104   71966 system_pods.go:89] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.195109   71966 system_pods.go:89] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.195114   71966 system_pods.go:89] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.195118   71966 system_pods.go:89] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.195122   71966 system_pods.go:89] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.195128   71966 system_pods.go:89] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.195133   71966 system_pods.go:89] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.195139   71966 system_pods.go:126] duration metric: took 9.711803ms to wait for k8s-apps to be running ...
	I0425 20:08:35.195155   71966 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:35.195195   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:35.213494   71966 system_svc.go:56] duration metric: took 18.331225ms WaitForService to wait for kubelet
	I0425 20:08:35.213523   71966 kubeadm.go:576] duration metric: took 4m22.589912913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:35.213545   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:35.216461   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:35.216481   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:35.216493   71966 node_conditions.go:105] duration metric: took 2.94061ms to run NodePressure ...
	I0425 20:08:35.216502   71966 start.go:240] waiting for startup goroutines ...
	I0425 20:08:35.216509   71966 start.go:245] waiting for cluster config update ...
	I0425 20:08:35.216518   71966 start.go:254] writing updated cluster config ...
	I0425 20:08:35.216750   71966 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:35.265836   71966 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:35.269026   71966 out.go:177] * Done! kubectl is now configured to use "embed-certs-512173" cluster and "default" namespace by default
	I0425 20:08:34.404410   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:34.404662   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:44.405293   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:44.405518   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:04.406406   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:04.406676   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.407969   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:44.408240   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.408259   72712 kubeadm.go:309] 
	I0425 20:09:44.408293   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:09:44.408355   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:09:44.408373   72712 kubeadm.go:309] 
	I0425 20:09:44.408417   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:09:44.408448   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:09:44.408562   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:09:44.408575   72712 kubeadm.go:309] 
	I0425 20:09:44.408655   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:09:44.408684   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:09:44.408711   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:09:44.408718   72712 kubeadm.go:309] 
	I0425 20:09:44.408812   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:09:44.408912   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:09:44.408939   72712 kubeadm.go:309] 
	I0425 20:09:44.409085   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:09:44.409217   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:09:44.409341   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:09:44.409418   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:09:44.409433   72712 kubeadm.go:309] 
	I0425 20:09:44.410319   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:09:44.410423   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:09:44.410510   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0425 20:09:44.410640   72712 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0425 20:09:44.410700   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:09:45.395830   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:09:45.412628   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:09:45.423387   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:09:45.423412   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:09:45.423465   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:09:45.434317   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:09:45.434389   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:09:45.445657   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:09:45.455698   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:09:45.455772   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:09:45.466137   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.476140   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:09:45.476192   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.486410   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:09:45.495465   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:09:45.495522   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:09:45.505410   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:09:45.726416   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:11:42.214574   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:11:42.214715   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0425 20:11:42.216323   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:11:42.216393   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:11:42.216507   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:11:42.216650   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:11:42.216795   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:11:42.216882   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:11:42.218766   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:11:42.218847   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:11:42.218923   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:11:42.219042   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:11:42.219103   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:11:42.219167   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:11:42.219237   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:11:42.219321   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:11:42.219407   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:11:42.219519   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:11:42.219639   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:11:42.219694   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:11:42.219742   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:11:42.219786   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:11:42.219831   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:11:42.219883   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:11:42.219929   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:11:42.220029   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:11:42.220139   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:11:42.220204   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:11:42.220308   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:11:42.222891   72712 out.go:204]   - Booting up control plane ...
	I0425 20:11:42.222979   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:11:42.223054   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:11:42.223129   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:11:42.223222   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:11:42.223404   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:11:42.223459   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:11:42.223565   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.223835   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.223937   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224165   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224243   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224457   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224541   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224799   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224902   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.225125   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.225134   72712 kubeadm.go:309] 
	I0425 20:11:42.225166   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:11:42.225204   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:11:42.225210   72712 kubeadm.go:309] 
	I0425 20:11:42.225239   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:11:42.225267   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:11:42.225352   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:11:42.225358   72712 kubeadm.go:309] 
	I0425 20:11:42.225446   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:11:42.225476   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:11:42.225522   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:11:42.225533   72712 kubeadm.go:309] 
	I0425 20:11:42.225626   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:11:42.225714   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:11:42.225729   72712 kubeadm.go:309] 
	I0425 20:11:42.225875   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:11:42.225951   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:11:42.226022   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:11:42.226096   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:11:42.226129   72712 kubeadm.go:309] 
	I0425 20:11:42.226162   72712 kubeadm.go:393] duration metric: took 8m0.122692927s to StartCluster
	I0425 20:11:42.226242   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:11:42.226299   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:11:42.283295   72712 cri.go:89] found id: ""
	I0425 20:11:42.283320   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.283329   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:11:42.283335   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:11:42.283389   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:11:42.322462   72712 cri.go:89] found id: ""
	I0425 20:11:42.322493   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.322505   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:11:42.322512   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:11:42.322574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:11:42.372329   72712 cri.go:89] found id: ""
	I0425 20:11:42.372355   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.372363   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:11:42.372369   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:11:42.372416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:11:42.420348   72712 cri.go:89] found id: ""
	I0425 20:11:42.420374   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.420382   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:11:42.420389   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:11:42.420447   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:11:42.460274   72712 cri.go:89] found id: ""
	I0425 20:11:42.460317   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.460329   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:11:42.460337   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:11:42.460395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:11:42.503828   72712 cri.go:89] found id: ""
	I0425 20:11:42.503855   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.503867   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:11:42.503874   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:11:42.503933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:11:42.545045   72712 cri.go:89] found id: ""
	I0425 20:11:42.545070   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.545086   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:11:42.545095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:11:42.545156   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:11:42.586389   72712 cri.go:89] found id: ""
	I0425 20:11:42.586413   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.586421   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:11:42.586429   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:11:42.586440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:11:42.602835   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:11:42.602863   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:11:42.695131   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:11:42.695153   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:11:42.695168   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:11:42.819889   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:11:42.819922   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:11:42.869446   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:11:42.869474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0425 20:11:42.927184   72712 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0425 20:11:42.927236   72712 out.go:239] * 
	W0425 20:11:42.927291   72712 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.927311   72712 out.go:239] * 
	W0425 20:11:42.928275   72712 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 20:11:42.931353   72712 out.go:177] 
	W0425 20:11:42.932654   72712 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.932696   72712 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0425 20:11:42.932713   72712 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0425 20:11:42.934227   72712 out.go:177] 
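	(Editor's note, not part of the captured output: the Suggestion line above recommends re-running with the kubelet cgroup driver pinned to systemd. A minimal sketch of that re-run, assuming the same profile and otherwise unchanged flags, where <profile> is a placeholder for the profile name used in this test:
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	whether this resolves the "kubelet is not running" failure would still need to be confirmed against 'journalctl -xeu kubelet' on the node.)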
	
	
	==> CRI-O <==
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.893498141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d21de37-4890-409c-9399-1c2696e5620a name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.893693782Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075436968615154,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854776f370afd769520f7dd7fd2cd6f4088109b63b5404544585784fc25663c6,PodSandboxId:15ef1946510c86cd77304767a5a673cedf3b91ba715619788f50870b8dcfe5f5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075416854493311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa3cc9ba-0ade-4039-a7f9-377e809f2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 312d4fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1,PodSandboxId:09f62e29b3db9ba7ec770035e57fe6b766e952b43dc7219ebc5d8017b3f997c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075413892954632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z6ls5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef8d9f5-f623-4632-bb88-7e5c60220725,},Annotations:map[string]string{io.kubernetes.container.hash: 174bdd8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075406121225952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c,PodSandboxId:e4f5f5571a966a63e599fd628cfb69001dad1712ec1f5b5c9515012f278b7eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075406068847960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqmtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ef58b-09d4-4e88-925b-b5a
3afc68361,},Annotations:map[string]string{io.kubernetes.container.hash: 6a43d313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075,PodSandboxId:fce641181064f56cf7e95bc6d921842f082527ee6627528ec58fb8c5730ae6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075401473770392,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaac9ac173dc156b9690dc6b
e7f1916,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3,PodSandboxId:308c50030e231f0fe3ffeb1d2c8c4abc82e51179ffba4bacfd95dcee6f8ed331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075401469413711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c614667a3a1301a9dcae27075736d426,},Annotations:map[string
]string{io.kubernetes.container.hash: 19e66a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa,PodSandboxId:33759899f143a39023c021fbf27602a0ad2454a572816760590c9a4add2b1ef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075401490467231,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18075c0328297e29839df100d21ef24,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 5af9b73b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4,PodSandboxId:39ac71ee0f08bd5c9c4c81c9f1b9699c9eb750ca1624e1e92df3b584e71394f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075401423696001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5097b936fa2847d92518c82e5376e274,}
,Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d21de37-4890-409c-9399-1c2696e5620a name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.934332909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4a6b891-56dc-492b-9695-db1ceefd7471 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.934404259Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4a6b891-56dc-492b-9695-db1ceefd7471 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.936005410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8472482-ca64-47a0-87cb-39de9d3922cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.936626111Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076215936599303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8472482-ca64-47a0-87cb-39de9d3922cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.937566432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29934093-f5e4-4caf-8c3e-e51e5f866b98 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.937645147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29934093-f5e4-4caf-8c3e-e51e5f866b98 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.938347876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075436968615154,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854776f370afd769520f7dd7fd2cd6f4088109b63b5404544585784fc25663c6,PodSandboxId:15ef1946510c86cd77304767a5a673cedf3b91ba715619788f50870b8dcfe5f5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075416854493311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa3cc9ba-0ade-4039-a7f9-377e809f2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 312d4fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1,PodSandboxId:09f62e29b3db9ba7ec770035e57fe6b766e952b43dc7219ebc5d8017b3f997c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075413892954632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z6ls5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef8d9f5-f623-4632-bb88-7e5c60220725,},Annotations:map[string]string{io.kubernetes.container.hash: 174bdd8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075406121225952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c,PodSandboxId:e4f5f5571a966a63e599fd628cfb69001dad1712ec1f5b5c9515012f278b7eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075406068847960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqmtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ef58b-09d4-4e88-925b-b5a
3afc68361,},Annotations:map[string]string{io.kubernetes.container.hash: 6a43d313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075,PodSandboxId:fce641181064f56cf7e95bc6d921842f082527ee6627528ec58fb8c5730ae6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075401473770392,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaac9ac173dc156b9690dc6b
e7f1916,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3,PodSandboxId:308c50030e231f0fe3ffeb1d2c8c4abc82e51179ffba4bacfd95dcee6f8ed331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075401469413711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c614667a3a1301a9dcae27075736d426,},Annotations:map[string
]string{io.kubernetes.container.hash: 19e66a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa,PodSandboxId:33759899f143a39023c021fbf27602a0ad2454a572816760590c9a4add2b1ef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075401490467231,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18075c0328297e29839df100d21ef24,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 5af9b73b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4,PodSandboxId:39ac71ee0f08bd5c9c4c81c9f1b9699c9eb750ca1624e1e92df3b584e71394f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075401423696001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5097b936fa2847d92518c82e5376e274,}
,Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29934093-f5e4-4caf-8c3e-e51e5f866b98 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.974986859Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=79640be7-aba4-4f1e-bd9a-47efc7eabf69 name=/runtime.v1.RuntimeService/Status
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.975070119Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=79640be7-aba4-4f1e-bd9a-47efc7eabf69 name=/runtime.v1.RuntimeService/Status
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.986790571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b474d2a-fbb7-4db0-99b3-4d1ae8bb1e70 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.987350250Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b474d2a-fbb7-4db0-99b3-4d1ae8bb1e70 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.988611159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7733ab1f-dc68-40d1-be88-c807c5bdeb42 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.989004092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076215988982719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7733ab1f-dc68-40d1-be88-c807c5bdeb42 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.989567372Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abfdb298-f14e-41b3-ab9d-1b259b08d699 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.989646181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abfdb298-f14e-41b3-ab9d-1b259b08d699 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:16:55 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:55.989917140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075436968615154,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854776f370afd769520f7dd7fd2cd6f4088109b63b5404544585784fc25663c6,PodSandboxId:15ef1946510c86cd77304767a5a673cedf3b91ba715619788f50870b8dcfe5f5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075416854493311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa3cc9ba-0ade-4039-a7f9-377e809f2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 312d4fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1,PodSandboxId:09f62e29b3db9ba7ec770035e57fe6b766e952b43dc7219ebc5d8017b3f997c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075413892954632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z6ls5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef8d9f5-f623-4632-bb88-7e5c60220725,},Annotations:map[string]string{io.kubernetes.container.hash: 174bdd8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075406121225952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c,PodSandboxId:e4f5f5571a966a63e599fd628cfb69001dad1712ec1f5b5c9515012f278b7eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075406068847960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqmtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ef58b-09d4-4e88-925b-b5a
3afc68361,},Annotations:map[string]string{io.kubernetes.container.hash: 6a43d313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075,PodSandboxId:fce641181064f56cf7e95bc6d921842f082527ee6627528ec58fb8c5730ae6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075401473770392,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaac9ac173dc156b9690dc6b
e7f1916,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3,PodSandboxId:308c50030e231f0fe3ffeb1d2c8c4abc82e51179ffba4bacfd95dcee6f8ed331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075401469413711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c614667a3a1301a9dcae27075736d426,},Annotations:map[string
]string{io.kubernetes.container.hash: 19e66a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa,PodSandboxId:33759899f143a39023c021fbf27602a0ad2454a572816760590c9a4add2b1ef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075401490467231,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18075c0328297e29839df100d21ef24,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 5af9b73b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4,PodSandboxId:39ac71ee0f08bd5c9c4c81c9f1b9699c9eb750ca1624e1e92df3b584e71394f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075401423696001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5097b936fa2847d92518c82e5376e274,}
,Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abfdb298-f14e-41b3-ab9d-1b259b08d699 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:16:56 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:56.030230015Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd5dcf56-bc50-48ed-9ead-29b6e92b3594 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:16:56 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:56.030308229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd5dcf56-bc50-48ed-9ead-29b6e92b3594 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:16:56 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:56.031536686Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4625de98-5993-4f97-a1d5-8463f9fc0319 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:16:56 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:56.031985886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076216031960305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4625de98-5993-4f97-a1d5-8463f9fc0319 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:16:56 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:56.032669132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=770c1542-5f3c-48ba-bedb-6a72a9cdd0ff name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:16:56 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:56.032722832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=770c1542-5f3c-48ba-bedb-6a72a9cdd0ff name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:16:56 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:16:56.033234915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075436968615154,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854776f370afd769520f7dd7fd2cd6f4088109b63b5404544585784fc25663c6,PodSandboxId:15ef1946510c86cd77304767a5a673cedf3b91ba715619788f50870b8dcfe5f5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075416854493311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa3cc9ba-0ade-4039-a7f9-377e809f2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 312d4fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1,PodSandboxId:09f62e29b3db9ba7ec770035e57fe6b766e952b43dc7219ebc5d8017b3f997c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075413892954632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z6ls5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef8d9f5-f623-4632-bb88-7e5c60220725,},Annotations:map[string]string{io.kubernetes.container.hash: 174bdd8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075406121225952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c,PodSandboxId:e4f5f5571a966a63e599fd628cfb69001dad1712ec1f5b5c9515012f278b7eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075406068847960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqmtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ef58b-09d4-4e88-925b-b5a
3afc68361,},Annotations:map[string]string{io.kubernetes.container.hash: 6a43d313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075,PodSandboxId:fce641181064f56cf7e95bc6d921842f082527ee6627528ec58fb8c5730ae6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075401473770392,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaac9ac173dc156b9690dc6b
e7f1916,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3,PodSandboxId:308c50030e231f0fe3ffeb1d2c8c4abc82e51179ffba4bacfd95dcee6f8ed331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075401469413711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c614667a3a1301a9dcae27075736d426,},Annotations:map[string
]string{io.kubernetes.container.hash: 19e66a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa,PodSandboxId:33759899f143a39023c021fbf27602a0ad2454a572816760590c9a4add2b1ef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075401490467231,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18075c0328297e29839df100d21ef24,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 5af9b73b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4,PodSandboxId:39ac71ee0f08bd5c9c4c81c9f1b9699c9eb750ca1624e1e92df3b584e71394f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075401423696001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5097b936fa2847d92518c82e5376e274,}
,Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=770c1542-5f3c-48ba-bedb-6a72a9cdd0ff name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7aef2f269df51       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   66467b045e867       storage-provisioner
	854776f370afd       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   15ef1946510c8       busybox
	2370c81d0f1fb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   09f62e29b3db9       coredns-7db6d8ff4d-z6ls5
	c1088dde2fde0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   66467b045e867       storage-provisioner
	bb19806d4c42c       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago      Running             kube-proxy                1                   e4f5f5571a966       kube-proxy-bqmtp
	7c6a6c0bef83a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      13 minutes ago      Running             kube-apiserver            1                   33759899f143a       kube-apiserver-default-k8s-diff-port-142196
	a553ccfa98465       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      13 minutes ago      Running             kube-scheduler            1                   fce641181064f       kube-scheduler-default-k8s-diff-port-142196
	430ba8aceb30f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   308c50030e231       etcd-default-k8s-diff-port-142196
	ae2f5c52c77d7       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      13 minutes ago      Running             kube-controller-manager   1                   39ac71ee0f08b       kube-controller-manager-default-k8s-diff-port-142196
	
	
	==> coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35943 - 62266 "HINFO IN 7043630354879609154.1372615921858047967. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017474524s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-142196
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-142196
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=default-k8s-diff-port-142196
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T19_55_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:55:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-142196
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 20:16:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 20:14:08 +0000   Thu, 25 Apr 2024 19:55:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 20:14:08 +0000   Thu, 25 Apr 2024 19:55:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 20:14:08 +0000   Thu, 25 Apr 2024 19:55:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 20:14:08 +0000   Thu, 25 Apr 2024 20:03:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    default-k8s-diff-port-142196
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ad1f8fba81d4105a156fc610cbd8b0b
	  System UUID:                6ad1f8fb-a81d-4105-a156-fc610cbd8b0b
	  Boot ID:                    6256b908-1be9-403b-b416-d8693fb50908
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-z6ls5                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-142196                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-142196             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-142196    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-bqmtp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-142196             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-cphk6                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-142196 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-142196 event: Registered Node default-k8s-diff-port-142196 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-142196 event: Registered Node default-k8s-diff-port-142196 in Controller
	
	
	==> dmesg <==
	[Apr25 20:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052982] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043827] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.686276] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Apr25 20:03] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.612499] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.105808] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.059450] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072278] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.229550] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.148093] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.353224] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +5.485161] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.080522] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.308265] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +5.598090] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.632799] systemd-fstab-generator[1557]: Ignoring "noauto" option for root device
	[  +2.142993] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.141446] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] <==
	{"level":"info","ts":"2024-04-25T20:03:40.951383Z","caller":"traceutil/trace.go:171","msg":"trace[1452196359] transaction","detail":"{read_only:false; response_revision:562; number_of_response:1; }","duration":"587.794954ms","start":"2024-04-25T20:03:40.363573Z","end":"2024-04-25T20:03:40.951368Z","steps":["trace[1452196359] 'process raft request'  (duration: 133.282181ms)","trace[1452196359] 'compare'  (duration: 453.506335ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-25T20:03:40.951715Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:03:40.363556Z","time spent":"588.132463ms","remote":"127.0.0.1:36414","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:558 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2024-04-25T20:03:40.951824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"584.482686ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-142196\" ","response":"range_response_count:1 size:5536"}
	{"level":"info","ts":"2024-04-25T20:03:40.951878Z","caller":"traceutil/trace.go:171","msg":"trace[819596987] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-142196; range_end:; response_count:1; response_revision:564; }","duration":"584.561905ms","start":"2024-04-25T20:03:40.367307Z","end":"2024-04-25T20:03:40.951868Z","steps":["trace[819596987] 'agreement among raft nodes before linearized reading'  (duration: 584.340674ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T20:03:40.951928Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:03:40.367296Z","time spent":"584.624761ms","remote":"127.0.0.1:36140","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5559,"request content":"key:\"/registry/minions/default-k8s-diff-port-142196\" "}
	{"level":"warn","ts":"2024-04-25T20:03:40.951915Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.537471ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-25T20:03:40.95366Z","caller":"traceutil/trace.go:171","msg":"trace[1015537593] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:564; }","duration":"212.275732ms","start":"2024-04-25T20:03:40.741372Z","end":"2024-04-25T20:03:40.953647Z","steps":["trace[1015537593] 'agreement among raft nodes before linearized reading'  (duration: 210.529192ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T20:03:41.099721Z","caller":"traceutil/trace.go:171","msg":"trace[1210898283] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"125.206764ms","start":"2024-04-25T20:03:40.974501Z","end":"2024-04-25T20:03:41.099707Z","steps":["trace[1210898283] 'process raft request'  (duration: 124.505489ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T20:04:02.20059Z","caller":"traceutil/trace.go:171","msg":"trace[932700225] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"461.728259ms","start":"2024-04-25T20:04:01.73884Z","end":"2024-04-25T20:04:02.200569Z","steps":["trace[932700225] 'process raft request'  (duration: 461.472322ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T20:04:02.200786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:04:01.738826Z","time spent":"461.873827ms","remote":"127.0.0.1:36054","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":833,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a4098558\" mod_revision:537 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a4098558\" value_size:738 lease:6421727447003126453 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a4098558\" > >"}
	{"level":"warn","ts":"2024-04-25T20:04:02.545033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.69523ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6421727447003126827 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" mod_revision:570 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" value_size:4212 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-25T20:04:02.545181Z","caller":"traceutil/trace.go:171","msg":"trace[1603849765] linearizableReadLoop","detail":"{readStateIndex:627; appliedIndex:626; }","duration":"678.629338ms","start":"2024-04-25T20:04:01.866538Z","end":"2024-04-25T20:04:02.545168Z","steps":["trace[1603849765] 'read index received'  (duration: 334.53717ms)","trace[1603849765] 'applied index is now lower than readState.Index'  (duration: 344.091257ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-25T20:04:02.54524Z","caller":"traceutil/trace.go:171","msg":"trace[222074805] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"804.130096ms","start":"2024-04-25T20:04:01.741104Z","end":"2024-04-25T20:04:02.545234Z","steps":["trace[222074805] 'process raft request'  (duration: 686.094133ms)","trace[222074805] 'compare'  (duration: 117.612416ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-25T20:04:02.545289Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:04:01.741089Z","time spent":"804.16781ms","remote":"127.0.0.1:36142","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4278,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" mod_revision:570 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" value_size:4212 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" > >"}
	{"level":"warn","ts":"2024-04-25T20:04:02.545403Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"678.863389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" ","response":"range_response_count:1 size:4293"}
	{"level":"info","ts":"2024-04-25T20:04:02.545446Z","caller":"traceutil/trace.go:171","msg":"trace[1463483784] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-cphk6; range_end:; response_count:1; response_revision:584; }","duration":"678.924592ms","start":"2024-04-25T20:04:01.866515Z","end":"2024-04-25T20:04:02.54544Z","steps":["trace[1463483784] 'agreement among raft nodes before linearized reading'  (duration: 678.864971ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T20:04:02.545469Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:04:01.866502Z","time spent":"678.961891ms","remote":"127.0.0.1:36142","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4316,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" "}
	{"level":"warn","ts":"2024-04-25T20:04:02.545592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.28737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a409cdbd\" ","response":"range_response_count:1 size:804"}
	{"level":"info","ts":"2024-04-25T20:04:02.545651Z","caller":"traceutil/trace.go:171","msg":"trace[453362744] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a409cdbd; range_end:; response_count:1; response_revision:584; }","duration":"341.344309ms","start":"2024-04-25T20:04:02.204297Z","end":"2024-04-25T20:04:02.545642Z","steps":["trace[453362744] 'agreement among raft nodes before linearized reading'  (duration: 341.22708ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T20:04:02.545678Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:04:02.204253Z","time spent":"341.418146ms","remote":"127.0.0.1:36054","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":827,"request content":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a409cdbd\" "}
	{"level":"warn","ts":"2024-04-25T20:04:02.545861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.206725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-25T20:04:02.54593Z","caller":"traceutil/trace.go:171","msg":"trace[1874879402] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:584; }","duration":"265.294811ms","start":"2024-04-25T20:04:02.280626Z","end":"2024-04-25T20:04:02.54592Z","steps":["trace[1874879402] 'agreement among raft nodes before linearized reading'  (duration: 265.212487ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T20:13:22.661386Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":809}
	{"level":"info","ts":"2024-04-25T20:13:22.671708Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":809,"took":"9.667298ms","hash":2325524828,"current-db-size-bytes":2568192,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2568192,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-04-25T20:13:22.67183Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2325524828,"revision":809,"compact-revision":-1}
	
	
	==> kernel <==
	 20:16:56 up 14 min,  0 users,  load average: 0.13, 0.16, 0.10
	Linux default-k8s-diff-port-142196 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] <==
	I0425 20:11:25.519700       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:13:24.522704       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:13:24.522853       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0425 20:13:25.523800       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:13:25.523919       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:13:25.523962       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:13:25.523868       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:13:25.524055       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:13:25.525075       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:14:25.524647       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:14:25.525022       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:14:25.525062       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:14:25.525199       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:14:25.525292       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:14:25.526189       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:16:25.525498       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:16:25.525613       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:16:25.525624       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:16:25.527293       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:16:25.527378       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:16:25.527386       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] <==
	I0425 20:11:10.205855       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:11:39.687054       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:11:40.214354       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:12:09.693480       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:12:10.223259       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:12:39.698886       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:12:40.231845       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:13:09.703947       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:13:10.240494       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:13:39.708945       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:13:40.249110       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:14:09.715238       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:14:10.257640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0425 20:14:34.752727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="293.762µs"
	E0425 20:14:39.721467       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:14:40.266299       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0425 20:14:48.754292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="62.675µs"
	E0425 20:15:09.726833       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:15:10.274856       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:15:39.736244       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:15:40.287564       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:16:09.743256       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:16:10.295483       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:16:39.749448       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:16:40.304187       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] <==
	I0425 20:03:26.253815       1 server_linux.go:69] "Using iptables proxy"
	I0425 20:03:26.263390       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	I0425 20:03:26.311409       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 20:03:26.311539       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 20:03:26.311594       1 server_linux.go:165] "Using iptables Proxier"
	I0425 20:03:26.314655       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 20:03:26.314900       1 server.go:872] "Version info" version="v1.30.0"
	I0425 20:03:26.314960       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 20:03:26.316000       1 config.go:192] "Starting service config controller"
	I0425 20:03:26.316050       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 20:03:26.316082       1 config.go:101] "Starting endpoint slice config controller"
	I0425 20:03:26.316098       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 20:03:26.318289       1 config.go:319] "Starting node config controller"
	I0425 20:03:26.318335       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 20:03:26.417074       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 20:03:26.417271       1 shared_informer.go:320] Caches are synced for service config
	I0425 20:03:26.418760       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] <==
	I0425 20:03:22.784588       1 serving.go:380] Generated self-signed cert in-memory
	W0425 20:03:24.484484       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0425 20:03:24.484606       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 20:03:24.484618       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0425 20:03:24.484625       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0425 20:03:24.517612       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0425 20:03:24.517662       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 20:03:24.519752       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0425 20:03:24.520106       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0425 20:03:24.520228       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0425 20:03:24.520327       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0425 20:03:24.620667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 25 20:14:23 default-k8s-diff-port-142196 kubelet[948]: E0425 20:14:23.746992     948 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 25 20:14:23 default-k8s-diff-port-142196 kubelet[948]: E0425 20:14:23.747454     948 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 25 20:14:23 default-k8s-diff-port-142196 kubelet[948]: E0425 20:14:23.748544     948 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwhmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-cphk6_kube-system(e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 25 20:14:23 default-k8s-diff-port-142196 kubelet[948]: E0425 20:14:23.749280     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:14:34 default-k8s-diff-port-142196 kubelet[948]: E0425 20:14:34.733956     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:14:48 default-k8s-diff-port-142196 kubelet[948]: E0425 20:14:48.737079     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:15:03 default-k8s-diff-port-142196 kubelet[948]: E0425 20:15:03.732973     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:15:14 default-k8s-diff-port-142196 kubelet[948]: E0425 20:15:14.733875     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:15:20 default-k8s-diff-port-142196 kubelet[948]: E0425 20:15:20.768504     948 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:15:20 default-k8s-diff-port-142196 kubelet[948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:15:20 default-k8s-diff-port-142196 kubelet[948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:15:20 default-k8s-diff-port-142196 kubelet[948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:15:20 default-k8s-diff-port-142196 kubelet[948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:15:26 default-k8s-diff-port-142196 kubelet[948]: E0425 20:15:26.733592     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:15:39 default-k8s-diff-port-142196 kubelet[948]: E0425 20:15:39.733561     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:15:52 default-k8s-diff-port-142196 kubelet[948]: E0425 20:15:52.732945     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:16:04 default-k8s-diff-port-142196 kubelet[948]: E0425 20:16:04.734541     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:16:18 default-k8s-diff-port-142196 kubelet[948]: E0425 20:16:18.735871     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:16:20 default-k8s-diff-port-142196 kubelet[948]: E0425 20:16:20.753558     948 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:16:20 default-k8s-diff-port-142196 kubelet[948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:16:20 default-k8s-diff-port-142196 kubelet[948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:16:20 default-k8s-diff-port-142196 kubelet[948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:16:20 default-k8s-diff-port-142196 kubelet[948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:16:32 default-k8s-diff-port-142196 kubelet[948]: E0425 20:16:32.733189     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:16:44 default-k8s-diff-port-142196 kubelet[948]: E0425 20:16:44.733503     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	
	
	==> storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] <==
	I0425 20:03:57.108325       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0425 20:03:57.125297       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0425 20:03:57.125485       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0425 20:04:14.529603       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0425 20:04:14.533779       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"747fb1a6-d4a5-403e-811e-03c0478dbf31", APIVersion:"v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-142196_ff01e226-22c4-4e06-bfa5-18a0b24e1309 became leader
	I0425 20:04:14.534051       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-142196_ff01e226-22c4-4e06-bfa5-18a0b24e1309!
	I0425 20:04:14.636952       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-142196_ff01e226-22c4-4e06-bfa5-18a0b24e1309!
	
	
	==> storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] <==
	I0425 20:03:26.239180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0425 20:03:56.240952       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-142196 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-cphk6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-142196 describe pod metrics-server-569cc877fc-cphk6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-142196 describe pod metrics-server-569cc877fc-cphk6: exit status 1 (61.165661ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-cphk6" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-142196 describe pod metrics-server-569cc877fc-cphk6: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.42s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.72s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-744552 -n no-preload-744552
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-25 20:17:33.112368452 +0000 UTC m=+6379.521303004
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744552 -n no-preload-744552
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-744552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-744552 logs -n 25: (2.337372942s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-120641 sudo cat                             | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo find                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo crio                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-120641                                      | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113000 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:54 UTC |
	|         | disable-driver-mounts-113000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-512173            | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-744552             | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142196  | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210442        | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-512173                 | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-744552                  | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142196       | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:07 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210442             | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:59:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:59:17.353932   72712 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:59:17.354045   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354055   72712 out.go:304] Setting ErrFile to fd 2...
	I0425 19:59:17.354059   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354269   72712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:59:17.354795   72712 out.go:298] Setting JSON to false
	I0425 19:59:17.355681   72712 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6103,"bootTime":1714069054,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:59:17.355740   72712 start.go:139] virtualization: kvm guest
	I0425 19:59:17.357921   72712 out.go:177] * [old-k8s-version-210442] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:59:17.359325   72712 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:59:17.360640   72712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:59:17.359305   72712 notify.go:220] Checking for updates...
	I0425 19:59:17.361801   72712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:59:17.363086   72712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:59:17.364512   72712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:59:17.365842   72712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:59:17.367508   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 19:59:17.367909   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.367946   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.382995   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0425 19:59:17.383362   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.383991   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.384016   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.384378   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.384566   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.386317   72712 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0425 19:59:17.387599   72712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:59:17.387904   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.387948   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.402999   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0425 19:59:17.403506   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.403962   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.403986   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.404318   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.404472   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.438308   72712 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:59:17.439686   72712 start.go:297] selected driver: kvm2
	I0425 19:59:17.439716   72712 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.439831   72712 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:59:17.440486   72712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.440553   72712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:59:17.454719   72712 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:59:17.455114   72712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:59:17.455184   72712 cni.go:84] Creating CNI manager for ""
	I0425 19:59:17.455203   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:59:17.455266   72712 start.go:340] cluster config:
	{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.455393   72712 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.457210   72712 out.go:177] * Starting "old-k8s-version-210442" primary control-plane node in "old-k8s-version-210442" cluster
	I0425 19:59:18.474583   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:17.458384   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 19:59:17.458418   72712 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:59:17.458430   72712 cache.go:56] Caching tarball of preloaded images
	I0425 19:59:17.458517   72712 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:59:17.458529   72712 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0425 19:59:17.458638   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 19:59:17.458844   72712 start.go:360] acquireMachinesLock for old-k8s-version-210442: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:59:24.554517   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:27.626446   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:33.706451   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:36.778527   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:42.858471   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:45.930403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:52.010482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:55.082403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:01.162466   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:04.234537   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:10.314506   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:13.386463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:19.466523   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:22.538461   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:28.622423   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:31.690489   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:37.770534   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:40.842458   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:46.922463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:49.994524   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:56.074478   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:59.146487   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:05.226452   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:08.298480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:14.378455   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:17.450469   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:23.530513   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:26.602470   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:32.682497   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:35.754500   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:41.834480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:44.906482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:50.986468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:54.058502   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:00.138459   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:03.210554   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:09.290491   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:12.362472   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:18.442476   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:21.514468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.599158   72220 start.go:364] duration metric: took 4m21.632012686s to acquireMachinesLock for "no-preload-744552"
	I0425 20:02:30.599206   72220 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:30.599212   72220 fix.go:54] fixHost starting: 
	I0425 20:02:30.599516   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:30.599545   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:30.614130   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0425 20:02:30.614502   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:30.614962   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:02:30.614979   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:30.615306   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:30.615513   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:30.615640   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:02:30.617129   72220 fix.go:112] recreateIfNeeded on no-preload-744552: state=Stopped err=<nil>
	I0425 20:02:30.617150   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	W0425 20:02:30.617300   72220 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:30.619253   72220 out.go:177] * Restarting existing kvm2 VM for "no-preload-744552" ...
	I0425 20:02:27.594454   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.596600   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:02:30.596654   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.596986   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:02:30.597016   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.597206   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:02:30.599042   71966 machine.go:97] duration metric: took 4m44.620242563s to provisionDockerMachine
	I0425 20:02:30.599079   71966 fix.go:56] duration metric: took 4m44.639860566s for fixHost
	I0425 20:02:30.599085   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 4m44.639890108s
	W0425 20:02:30.599104   71966 start.go:713] error starting host: provision: host is not running
	W0425 20:02:30.599182   71966 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0425 20:02:30.599192   71966 start.go:728] Will try again in 5 seconds ...
	I0425 20:02:30.620801   72220 main.go:141] libmachine: (no-preload-744552) Calling .Start
	I0425 20:02:30.620978   72220 main.go:141] libmachine: (no-preload-744552) Ensuring networks are active...
	I0425 20:02:30.621640   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network default is active
	I0425 20:02:30.621965   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network mk-no-preload-744552 is active
	I0425 20:02:30.622317   72220 main.go:141] libmachine: (no-preload-744552) Getting domain xml...
	I0425 20:02:30.623010   72220 main.go:141] libmachine: (no-preload-744552) Creating domain...
	I0425 20:02:31.809967   72220 main.go:141] libmachine: (no-preload-744552) Waiting to get IP...
	I0425 20:02:31.810856   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:31.811353   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:31.811403   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:31.811308   73381 retry.go:31] will retry after 294.641704ms: waiting for machine to come up
	I0425 20:02:32.107955   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.108508   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.108542   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.108449   73381 retry.go:31] will retry after 373.307428ms: waiting for machine to come up
	I0425 20:02:32.483111   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.483590   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.483619   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.483546   73381 retry.go:31] will retry after 484.455862ms: waiting for machine to come up
	I0425 20:02:32.969188   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.969657   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.969694   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.969602   73381 retry.go:31] will retry after 382.359725ms: waiting for machine to come up
	I0425 20:02:33.353143   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.353598   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.353621   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.353550   73381 retry.go:31] will retry after 515.389674ms: waiting for machine to come up
	I0425 20:02:35.602273   71966 start.go:360] acquireMachinesLock for embed-certs-512173: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 20:02:33.870172   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.870652   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.870676   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.870603   73381 retry.go:31] will retry after 714.032032ms: waiting for machine to come up
	I0425 20:02:34.586478   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:34.586833   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:34.586861   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:34.586791   73381 retry.go:31] will retry after 1.005122465s: waiting for machine to come up
	I0425 20:02:35.593962   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:35.594367   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:35.594400   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:35.594310   73381 retry.go:31] will retry after 1.483740326s: waiting for machine to come up
	I0425 20:02:37.079306   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:37.079751   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:37.079784   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:37.079700   73381 retry.go:31] will retry after 1.828802911s: waiting for machine to come up
	I0425 20:02:38.910631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:38.911138   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:38.911163   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:38.911086   73381 retry.go:31] will retry after 1.528405609s: waiting for machine to come up
	I0425 20:02:40.441741   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:40.442251   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:40.442277   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:40.442200   73381 retry.go:31] will retry after 2.817901976s: waiting for machine to come up
	I0425 20:02:43.263903   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:43.264376   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:43.264408   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:43.264324   73381 retry.go:31] will retry after 2.258888981s: waiting for machine to come up
	I0425 20:02:45.525701   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:45.526139   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:45.526168   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:45.526106   73381 retry.go:31] will retry after 4.008258204s: waiting for machine to come up
	I0425 20:02:50.951421   72304 start.go:364] duration metric: took 4m34.5614094s to acquireMachinesLock for "default-k8s-diff-port-142196"
	I0425 20:02:50.951491   72304 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:50.951500   72304 fix.go:54] fixHost starting: 
	I0425 20:02:50.951906   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:50.951944   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:50.968074   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33481
	I0425 20:02:50.968452   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:50.968862   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:02:50.968886   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:50.969238   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:50.969460   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:02:50.969622   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:02:50.971100   72304 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142196: state=Stopped err=<nil>
	I0425 20:02:50.971125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	W0425 20:02:50.971271   72304 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:50.974623   72304 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142196" ...
	I0425 20:02:50.975991   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Start
	I0425 20:02:50.976154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring networks are active...
	I0425 20:02:50.976794   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network default is active
	I0425 20:02:50.977111   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network mk-default-k8s-diff-port-142196 is active
	I0425 20:02:50.977490   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Getting domain xml...
	I0425 20:02:50.978200   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Creating domain...
	I0425 20:02:49.538522   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.538999   72220 main.go:141] libmachine: (no-preload-744552) Found IP for machine: 192.168.72.142
	I0425 20:02:49.539033   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has current primary IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.539043   72220 main.go:141] libmachine: (no-preload-744552) Reserving static IP address...
	I0425 20:02:49.539420   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.539458   72220 main.go:141] libmachine: (no-preload-744552) DBG | skip adding static IP to network mk-no-preload-744552 - found existing host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"}
	I0425 20:02:49.539469   72220 main.go:141] libmachine: (no-preload-744552) Reserved static IP address: 192.168.72.142
	I0425 20:02:49.539483   72220 main.go:141] libmachine: (no-preload-744552) Waiting for SSH to be available...
	I0425 20:02:49.539490   72220 main.go:141] libmachine: (no-preload-744552) DBG | Getting to WaitForSSH function...
	I0425 20:02:49.541631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542042   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.542073   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542221   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH client type: external
	I0425 20:02:49.542270   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa (-rw-------)
	I0425 20:02:49.542300   72220 main.go:141] libmachine: (no-preload-744552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:02:49.542316   72220 main.go:141] libmachine: (no-preload-744552) DBG | About to run SSH command:
	I0425 20:02:49.542334   72220 main.go:141] libmachine: (no-preload-744552) DBG | exit 0
	I0425 20:02:49.670034   72220 main.go:141] libmachine: (no-preload-744552) DBG | SSH cmd err, output: <nil>: 
	I0425 20:02:49.670414   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetConfigRaw
	I0425 20:02:49.671039   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:49.673279   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673592   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.673629   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673878   72220 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/config.json ...
	I0425 20:02:49.674066   72220 machine.go:94] provisionDockerMachine start ...
	I0425 20:02:49.674083   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:49.674317   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.676767   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677084   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.677115   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677238   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.677413   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677562   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677698   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.677841   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.678037   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.678049   72220 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:02:49.790734   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:02:49.790764   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791028   72220 buildroot.go:166] provisioning hostname "no-preload-744552"
	I0425 20:02:49.791061   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791248   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.793907   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794279   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.794313   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794450   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.794649   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794787   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794908   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.795054   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.795256   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.795277   72220 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-744552 && echo "no-preload-744552" | sudo tee /etc/hostname
	I0425 20:02:49.925459   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744552
	
	I0425 20:02:49.925483   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.928282   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928646   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.928680   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928831   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.929012   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929194   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929327   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.929481   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.929679   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.929709   72220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-744552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-744552/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-744552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:02:50.052805   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:02:50.052841   72220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:02:50.052861   72220 buildroot.go:174] setting up certificates
	I0425 20:02:50.052875   72220 provision.go:84] configureAuth start
	I0425 20:02:50.052887   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:50.053193   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.055800   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056145   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.056168   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056339   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.058090   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058395   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.058429   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058526   72220 provision.go:143] copyHostCerts
	I0425 20:02:50.058577   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:02:50.058587   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:02:50.058647   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:02:50.058742   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:02:50.058750   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:02:50.058774   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:02:50.058827   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:02:50.058834   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:02:50.058855   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:02:50.058904   72220 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.no-preload-744552 san=[127.0.0.1 192.168.72.142 localhost minikube no-preload-744552]
	I0425 20:02:50.247711   72220 provision.go:177] copyRemoteCerts
	I0425 20:02:50.247768   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:02:50.247792   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.250146   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250560   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.250600   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250780   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.250978   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.251128   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.251272   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.338105   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:02:50.365554   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 20:02:50.391433   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:02:50.416606   72220 provision.go:87] duration metric: took 363.720332ms to configureAuth
	I0425 20:02:50.416627   72220 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:02:50.416795   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:02:50.416876   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.419385   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419731   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.419764   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419903   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.420079   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420322   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420557   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.420724   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.420909   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.420929   72220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:02:50.702065   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:02:50.702104   72220 machine.go:97] duration metric: took 1.028026584s to provisionDockerMachine
	I0425 20:02:50.702117   72220 start.go:293] postStartSetup for "no-preload-744552" (driver="kvm2")
	I0425 20:02:50.702131   72220 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:02:50.702165   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.702531   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:02:50.702572   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.705595   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.705948   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.705992   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.706173   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.706367   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.706588   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.706759   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.794791   72220 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:02:50.799592   72220 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:02:50.799621   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:02:50.799701   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:02:50.799799   72220 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:02:50.799913   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:02:50.810796   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:02:50.836919   72220 start.go:296] duration metric: took 134.787005ms for postStartSetup
	I0425 20:02:50.836972   72220 fix.go:56] duration metric: took 20.237758066s for fixHost
	I0425 20:02:50.836995   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.839818   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840295   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.840325   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840429   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.840600   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840752   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840929   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.841079   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.841307   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.841338   72220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:02:50.951251   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075370.921171901
	
	I0425 20:02:50.951272   72220 fix.go:216] guest clock: 1714075370.921171901
	I0425 20:02:50.951279   72220 fix.go:229] Guest: 2024-04-25 20:02:50.921171901 +0000 UTC Remote: 2024-04-25 20:02:50.836976462 +0000 UTC m=+282.018789867 (delta=84.195439ms)
	I0425 20:02:50.951312   72220 fix.go:200] guest clock delta is within tolerance: 84.195439ms
	I0425 20:02:50.951321   72220 start.go:83] releasing machines lock for "no-preload-744552", held for 20.352126868s
	I0425 20:02:50.951348   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.951612   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.954231   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954614   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.954638   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954821   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955240   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955419   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955492   72220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:02:50.955540   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.955659   72220 ssh_runner.go:195] Run: cat /version.json
	I0425 20:02:50.955688   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.958155   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958476   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958517   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958541   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958661   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.958808   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.958903   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958932   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.958935   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.959045   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.959181   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.959192   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.959360   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.959471   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:51.066809   72220 ssh_runner.go:195] Run: systemctl --version
	I0425 20:02:51.073198   72220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:02:51.228547   72220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:02:51.236443   72220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:02:51.236518   72220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:02:51.256226   72220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:02:51.256244   72220 start.go:494] detecting cgroup driver to use...
	I0425 20:02:51.256307   72220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:02:51.278596   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:02:51.295692   72220 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:02:51.295751   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:02:51.310940   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:02:51.326072   72220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:02:51.459064   72220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:02:51.614563   72220 docker.go:233] disabling docker service ...
	I0425 20:02:51.614639   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:02:51.638817   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:02:51.658265   72220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:02:51.818412   72220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:02:51.943830   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:02:51.960672   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:02:51.982028   72220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:02:51.982090   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:51.994990   72220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:02:51.995079   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.007907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.020225   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.033306   72220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:02:52.046241   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.058282   72220 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.078907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.090258   72220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:02:52.100796   72220 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:02:52.100873   72220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:02:52.115600   72220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:02:52.125458   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:02:52.288142   72220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:02:52.430252   72220 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:02:52.430353   72220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:02:52.436493   72220 start.go:562] Will wait 60s for crictl version
	I0425 20:02:52.436565   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.441427   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:02:52.479709   72220 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:02:52.479810   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.512180   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.545115   72220 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:02:52.546476   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:52.549314   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549723   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:52.549759   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549926   72220 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0425 20:02:52.554924   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:02:52.568804   72220 kubeadm.go:877] updating cluster {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:02:52.568958   72220 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:02:52.568997   72220 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:02:52.609095   72220 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:02:52.609117   72220 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:02:52.609156   72220 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.609188   72220 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.609185   72220 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.609214   72220 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.609227   72220 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.609256   72220 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.609334   72220 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.609370   72220 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610726   72220 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.610747   72220 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610772   72220 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.610724   72220 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.610800   72220 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.610807   72220 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.611075   72220 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.611096   72220 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.753069   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0425 20:02:52.771762   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.825052   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908030   72220 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0425 20:02:52.908082   72220 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.908113   72220 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0425 20:02:52.908127   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.908135   72220 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908164   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.915126   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.915132   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.967834   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.969385   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.973718   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0425 20:02:52.973787   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0425 20:02:52.973823   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:52.973870   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:52.985763   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.986695   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.068153   72220 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0425 20:02:53.068196   72220 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.068269   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099237   72220 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0425 20:02:53.099257   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0425 20:02:53.099274   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099290   72220 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:53.099294   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0425 20:02:53.099330   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099368   72220 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0425 20:02:53.099401   72220 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:53.099433   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099333   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.115478   72220 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0425 20:02:53.115523   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.115526   72220 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.115610   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.550328   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.240552   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting to get IP...
	I0425 20:02:52.241327   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241657   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241757   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.241648   73527 retry.go:31] will retry after 195.006273ms: waiting for machine to come up
	I0425 20:02:52.438154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438702   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438726   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.438657   73527 retry.go:31] will retry after 365.911905ms: waiting for machine to come up
	I0425 20:02:52.806281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806793   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806826   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.806727   73527 retry.go:31] will retry after 448.572137ms: waiting for machine to come up
	I0425 20:02:53.257396   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257935   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257966   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.257889   73527 retry.go:31] will retry after 560.886917ms: waiting for machine to come up
	I0425 20:02:53.820527   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820954   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820979   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.820915   73527 retry.go:31] will retry after 514.294303ms: waiting for machine to come up
	I0425 20:02:54.336706   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337129   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:54.337101   73527 retry.go:31] will retry after 853.040726ms: waiting for machine to come up
	I0425 20:02:55.192349   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192857   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:55.192774   73527 retry.go:31] will retry after 1.17554782s: waiting for machine to come up
	I0425 20:02:56.232794   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.133436829s)
	I0425 20:02:56.232845   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0425 20:02:56.232854   72220 ssh_runner.go:235] Completed: which crictl: (3.133373607s)
	I0425 20:02:56.232875   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232915   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232961   72220 ssh_runner.go:235] Completed: which crictl: (3.133515676s)
	I0425 20:02:56.232919   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:56.233011   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:56.233050   72220 ssh_runner.go:235] Completed: which crictl: (3.11742497s)
	I0425 20:02:56.233089   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:56.233126   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (3.117580594s)
	I0425 20:02:56.233160   72220 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.6828061s)
	I0425 20:02:56.233167   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0425 20:02:56.233207   72220 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0425 20:02:56.233242   72220 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:56.233248   72220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:56.233284   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:56.323764   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0425 20:02:56.323884   72220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:02:56.323906   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0425 20:02:56.323989   72220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:02:58.553707   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.320762887s)
	I0425 20:02:58.553742   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0425 20:02:58.553768   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.320739179s)
	I0425 20:02:58.553784   72220 ssh_runner.go:235] Completed: which crictl: (2.320487912s)
	I0425 20:02:58.553807   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0425 20:02:58.553838   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:58.553864   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.320587538s)
	I0425 20:02:58.553889   72220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:02:58.553909   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0425 20:02:58.553948   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.229944417s)
	I0425 20:02:58.553959   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553989   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0425 20:02:58.554009   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553910   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.23000183s)
	I0425 20:02:58.554069   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0425 20:02:58.602692   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0425 20:02:58.602694   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0425 20:02:58.602819   72220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:02:56.369693   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370169   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:56.370115   73527 retry.go:31] will retry after 1.260629487s: waiting for machine to come up
	I0425 20:02:57.632705   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633187   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:57.633150   73527 retry.go:31] will retry after 1.291948113s: waiting for machine to come up
	I0425 20:02:58.926675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927167   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927196   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:58.927111   73527 retry.go:31] will retry after 1.869565597s: waiting for machine to come up
	I0425 20:03:00.799357   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799820   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799850   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:00.799750   73527 retry.go:31] will retry after 2.157801293s: waiting for machine to come up
	I0425 20:03:00.027830   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.473790165s)
	I0425 20:03:00.027869   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0425 20:03:00.027895   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027943   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027842   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.424998268s)
	I0425 20:03:00.027985   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0425 20:03:02.204218   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.176247608s)
	I0425 20:03:02.204254   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0425 20:03:02.204290   72220 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.204335   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.959407   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959789   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959812   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:02.959745   73527 retry.go:31] will retry after 2.617480271s: waiting for machine to come up
	I0425 20:03:05.579300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:05.579775   73527 retry.go:31] will retry after 4.058370199s: waiting for machine to come up
	I0425 20:03:06.132743   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.928385447s)
	I0425 20:03:06.132779   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0425 20:03:06.132805   72220 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:06.132857   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:08.314803   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.181910584s)
	I0425 20:03:08.314842   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0425 20:03:08.314881   72220 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:03:08.314930   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:03:11.255486   72712 start.go:364] duration metric: took 3m53.796595105s to acquireMachinesLock for "old-k8s-version-210442"
	I0425 20:03:11.255550   72712 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:11.255569   72712 fix.go:54] fixHost starting: 
	I0425 20:03:11.256083   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:11.256128   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:11.272950   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0425 20:03:11.273365   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:11.273878   72712 main.go:141] libmachine: Using API Version  1
	I0425 20:03:11.273907   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:11.274277   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:11.274487   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:11.274666   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetState
	I0425 20:03:11.276420   72712 fix.go:112] recreateIfNeeded on old-k8s-version-210442: state=Stopped err=<nil>
	I0425 20:03:11.276454   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	W0425 20:03:11.276608   72712 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:11.279156   72712 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210442" ...
	I0425 20:03:09.639300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639833   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Found IP for machine: 192.168.39.123
	I0425 20:03:09.639867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has current primary IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserving static IP address...
	I0425 20:03:09.640257   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.640281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | skip adding static IP to network mk-default-k8s-diff-port-142196 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"}
	I0425 20:03:09.640300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserved static IP address: 192.168.39.123
	I0425 20:03:09.640313   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for SSH to be available...
	I0425 20:03:09.640321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Getting to WaitForSSH function...
	I0425 20:03:09.643058   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643371   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.643400   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643506   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH client type: external
	I0425 20:03:09.643557   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa (-rw-------)
	I0425 20:03:09.643586   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:09.643609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | About to run SSH command:
	I0425 20:03:09.643618   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | exit 0
	I0425 20:03:09.766707   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:09.767091   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetConfigRaw
	I0425 20:03:09.767818   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:09.770573   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771012   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.771047   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771296   72304 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/config.json ...
	I0425 20:03:09.771580   72304 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:09.771609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:09.771884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.774255   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.774699   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774866   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.775044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775213   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775362   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.775520   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.775781   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.775797   72304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:09.884259   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:09.884288   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884519   72304 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142196"
	I0425 20:03:09.884547   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884747   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.887391   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.887798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.887829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.888003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.888215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888542   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.888703   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.888918   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.888934   72304 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142196 && echo "default-k8s-diff-port-142196" | sudo tee /etc/hostname
	I0425 20:03:10.015919   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142196
	
	I0425 20:03:10.015951   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.018640   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.018955   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.018987   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.019201   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.019398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019729   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.019906   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.020098   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.020120   72304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142196' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142196/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142196' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:10.145789   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:10.145822   72304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:10.145873   72304 buildroot.go:174] setting up certificates
	I0425 20:03:10.145886   72304 provision.go:84] configureAuth start
	I0425 20:03:10.145899   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:10.146185   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:10.148943   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149309   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.149342   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149492   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.152000   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152418   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.152445   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152621   72304 provision.go:143] copyHostCerts
	I0425 20:03:10.152681   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:10.152693   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:10.152758   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:10.152890   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:10.152905   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:10.152940   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:10.153033   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:10.153044   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:10.153072   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:10.153145   72304 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142196 san=[127.0.0.1 192.168.39.123 default-k8s-diff-port-142196 localhost minikube]
	I0425 20:03:10.572412   72304 provision.go:177] copyRemoteCerts
	I0425 20:03:10.572473   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:10.572496   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.575083   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.575421   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.575696   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.575799   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.575916   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:10.657850   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:10.685493   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0425 20:03:10.713230   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:10.740577   72304 provision.go:87] duration metric: took 594.674196ms to configureAuth
	I0425 20:03:10.740604   72304 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:10.740835   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:10.740916   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.743709   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744039   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.744071   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744236   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.744434   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744621   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744723   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.744901   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.745065   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.745083   72304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:11.017816   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:11.017844   72304 machine.go:97] duration metric: took 1.24624593s to provisionDockerMachine
	I0425 20:03:11.017858   72304 start.go:293] postStartSetup for "default-k8s-diff-port-142196" (driver="kvm2")
	I0425 20:03:11.017871   72304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:11.017892   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.018195   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:11.018231   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.020759   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021067   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.021092   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.021403   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.021600   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.021729   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.106290   72304 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:11.111532   72304 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:11.111560   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:11.111645   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:11.111744   72304 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:11.111856   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:11.122216   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:11.150472   72304 start.go:296] duration metric: took 132.600197ms for postStartSetup
	I0425 20:03:11.150520   72304 fix.go:56] duration metric: took 20.199020729s for fixHost
	I0425 20:03:11.150544   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.153466   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.153798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.153824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.154055   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.154289   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154483   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154635   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.154824   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:11.154991   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:11.155001   72304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 20:03:11.255330   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075391.221756501
	
	I0425 20:03:11.255357   72304 fix.go:216] guest clock: 1714075391.221756501
	I0425 20:03:11.255365   72304 fix.go:229] Guest: 2024-04-25 20:03:11.221756501 +0000 UTC Remote: 2024-04-25 20:03:11.15052524 +0000 UTC m=+294.908822896 (delta=71.231261ms)
	I0425 20:03:11.255384   72304 fix.go:200] guest clock delta is within tolerance: 71.231261ms
	I0425 20:03:11.255388   72304 start.go:83] releasing machines lock for "default-k8s-diff-port-142196", held for 20.303917474s
	I0425 20:03:11.255419   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.255700   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:11.258740   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259076   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.259104   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259414   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.259906   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260102   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260197   72304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:11.260241   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.260350   72304 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:11.260374   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.262843   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263001   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263216   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263245   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263365   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263480   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263669   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263679   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263864   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264026   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264039   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.264203   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.280701   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .Start
	I0425 20:03:11.280895   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring networks are active...
	I0425 20:03:11.281729   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network default is active
	I0425 20:03:11.282158   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network mk-old-k8s-version-210442 is active
	I0425 20:03:11.282639   72712 main.go:141] libmachine: (old-k8s-version-210442) Getting domain xml...
	I0425 20:03:11.283399   72712 main.go:141] libmachine: (old-k8s-version-210442) Creating domain...
	I0425 20:03:11.339564   72304 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:11.364667   72304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:11.526308   72304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:11.533487   72304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:11.533563   72304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:11.552090   72304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:11.552120   72304 start.go:494] detecting cgroup driver to use...
	I0425 20:03:11.552196   72304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:11.569573   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:11.584425   72304 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:11.584489   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:11.599083   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:11.613739   72304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:11.739574   72304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:11.911318   72304 docker.go:233] disabling docker service ...
	I0425 20:03:11.911390   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:11.928743   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:11.946101   72304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:12.112740   72304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:12.246863   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:12.269551   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:12.298838   72304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:12.298907   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.312059   72304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:12.312113   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.324076   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.336239   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.350088   72304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:12.368362   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.385406   72304 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.407195   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.420065   72304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:12.431195   72304 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:12.431260   72304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:12.446263   72304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:12.457137   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:12.622756   72304 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:12.799932   72304 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:12.800012   72304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:12.807795   72304 start.go:562] Will wait 60s for crictl version
	I0425 20:03:12.807862   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:03:12.813860   72304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:12.861249   72304 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:12.861327   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.896140   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.942768   72304 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:03:09.079550   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0425 20:03:09.079607   72220 cache_images.go:123] Successfully loaded all cached images
	I0425 20:03:09.079615   72220 cache_images.go:92] duration metric: took 16.470485982s to LoadCachedImages
	I0425 20:03:09.079629   72220 kubeadm.go:928] updating node { 192.168.72.142 8443 v1.30.0 crio true true} ...
	I0425 20:03:09.079764   72220 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-744552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:09.079839   72220 ssh_runner.go:195] Run: crio config
	I0425 20:03:09.139170   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:09.139194   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:09.139206   72220 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:09.139225   72220 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.142 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-744552 NodeName:no-preload-744552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:09.139365   72220 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-744552"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:09.139426   72220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:09.151828   72220 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:09.151884   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:09.163310   72220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0425 20:03:09.183132   72220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:09.203038   72220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
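	The kubeadm.yaml written above is multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch, not part of minikube, that splits such a file on document separators and prints each document's apiVersion and kind as a quick sanity check; the file path is taken from the log above and the gopkg.in/yaml.v3 dependency is an assumption:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log above; point this at a local copy if needed.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm configs are multi-document YAML separated by "---" lines.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var hdr struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &hdr); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", hdr.APIVersion, hdr.Kind)
	}
}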
	I0425 20:03:09.223717   72220 ssh_runner.go:195] Run: grep 192.168.72.142	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:09.228467   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:09.243976   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:09.361475   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:09.380862   72220 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552 for IP: 192.168.72.142
	I0425 20:03:09.380886   72220 certs.go:194] generating shared ca certs ...
	I0425 20:03:09.380901   72220 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:09.381076   72220 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:09.381132   72220 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:09.381147   72220 certs.go:256] generating profile certs ...
	I0425 20:03:09.381254   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/client.key
	I0425 20:03:09.381337   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key.a705cb96
	I0425 20:03:09.381392   72220 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key
	I0425 20:03:09.381538   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:09.381586   72220 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:09.381601   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:09.381638   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:09.381668   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:09.381702   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:09.381761   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:09.382459   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:09.423895   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:09.462481   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:09.491394   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:09.532779   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 20:03:09.569107   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 20:03:09.597381   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:09.623962   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:09.651141   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:09.677295   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:09.702404   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:09.729275   72220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:09.748421   72220 ssh_runner.go:195] Run: openssl version
	I0425 20:03:09.754848   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:09.768121   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774468   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774529   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.783568   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:09.799120   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:09.812983   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818660   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818740   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.826091   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:09.840115   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:09.853372   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858387   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858455   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.864693   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
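	The `ln -fs .../<hash>.0` steps above exist because OpenSSL looks up trusted CAs in /etc/ssl/certs by subject-hash filenames such as b5213941.0. A rough Go sketch of the same idea, shelling out to openssl for the hash (it assumes openssl on PATH and write access to /etc/ssl/certs, and is illustrative only, not how minikube does it internally):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at pemPath,
// mirroring the `openssl x509 -hash` + `ln -fs` pair in the log above.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace an existing link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hash symlink created")
}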
	I0425 20:03:09.876755   72220 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:09.882829   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:09.890219   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:09.897091   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:09.906017   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:09.913154   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:09.919989   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
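	Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours. A minimal Go sketch of the same check using crypto/x509 (the path is one of the certs named above; this is an illustration under those assumptions, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the case where `openssl x509 -checkend` would exit non-zero.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}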
	I0425 20:03:09.926552   72220 kubeadm.go:391] StartCluster: {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:09.926671   72220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:09.926734   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:09.971983   72220 cri.go:89] found id: ""
	I0425 20:03:09.972071   72220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:09.983371   72220 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:09.983399   72220 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:09.983406   72220 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:09.983451   72220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:09.994047   72220 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:09.995080   72220 kubeconfig.go:125] found "no-preload-744552" server: "https://192.168.72.142:8443"
	I0425 20:03:09.997202   72220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:10.007666   72220 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.142
	I0425 20:03:10.007703   72220 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:10.007713   72220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:10.007752   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:10.049581   72220 cri.go:89] found id: ""
	I0425 20:03:10.049679   72220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:10.071032   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:10.083240   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:10.083267   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:10.083314   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:10.093444   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:10.093507   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:10.104291   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:10.114596   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:10.114659   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:10.125118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.138299   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:10.138362   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.152185   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:10.163493   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:10.163555   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:10.177214   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:10.188286   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:10.312536   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.497483   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.184911769s)
	I0425 20:03:11.497531   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.753732   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.871246   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.968366   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:11.968445   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.468885   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.968598   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:13.037502   72220 api_server.go:72] duration metric: took 1.069135698s to wait for apiserver process to appear ...
	I0425 20:03:13.037542   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:13.037568   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:13.038540   72220 api_server.go:269] stopped: https://192.168.72.142:8443/healthz: Get "https://192.168.72.142:8443/healthz": dial tcp 192.168.72.142:8443: connect: connection refused
	I0425 20:03:13.537713   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:12.944206   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:12.947412   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.947822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:12.947852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.948086   72304 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:12.953504   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:12.969171   72304 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:12.969344   72304 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:12.969402   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:13.016509   72304 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:13.016585   72304 ssh_runner.go:195] Run: which lz4
	I0425 20:03:13.022023   72304 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 20:03:13.027861   72304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:13.027896   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:14.913405   72304 crio.go:462] duration metric: took 1.891428846s to copy over tarball
	I0425 20:03:14.913466   72304 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:03:12.659136   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting to get IP...
	I0425 20:03:12.660227   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.660770   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.660843   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.660724   73691 retry.go:31] will retry after 234.96602ms: waiting for machine to come up
	I0425 20:03:12.897395   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.897966   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.897993   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.897913   73691 retry.go:31] will retry after 387.692223ms: waiting for machine to come up
	I0425 20:03:13.287742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.288414   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.288443   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.288397   73691 retry.go:31] will retry after 461.897892ms: waiting for machine to come up
	I0425 20:03:13.752061   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.752574   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.752603   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.752513   73691 retry.go:31] will retry after 452.347315ms: waiting for machine to come up
	I0425 20:03:14.206275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.206684   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.206708   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.206629   73691 retry.go:31] will retry after 466.12355ms: waiting for machine to come up
	I0425 20:03:14.674265   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.674788   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.674818   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.674735   73691 retry.go:31] will retry after 697.70071ms: waiting for machine to come up
	I0425 20:03:15.373862   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:15.374297   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:15.374325   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:15.374252   73691 retry.go:31] will retry after 835.73273ms: waiting for machine to come up
	I0425 20:03:16.211394   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:16.211870   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:16.211902   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:16.211815   73691 retry.go:31] will retry after 1.26739043s: waiting for machine to come up
	I0425 20:03:16.441793   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.441829   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.441848   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.506023   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.506057   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.538293   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.544891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:16.544925   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.038519   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.049842   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.049883   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.538420   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.545891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.545929   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:18.038192   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:18.042957   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:03:18.063131   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:18.063171   72220 api_server.go:131] duration metric: took 5.025619242s to wait for apiserver health ...
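	The healthz polling above simply retries GET /healthz every ~500ms until the apiserver answers 200, tolerating the 403 (anonymous user) and 500 (post-start hooks still settling) responses seen in the log. A minimal sketch of such a loop; TLS verification is skipped because the probe is anonymous, and the URL and timeout are illustrative assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz keeps probing the URL until it returns 200 OK or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.142:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}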
	I0425 20:03:18.063182   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:18.063192   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:18.405047   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:03:18.552639   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:18.565507   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:03:18.591534   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:17.662135   72304 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.748640149s)
	I0425 20:03:17.662171   72304 crio.go:469] duration metric: took 2.748741671s to extract the tarball
	I0425 20:03:17.662184   72304 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:17.706288   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:17.773537   72304 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:03:17.773565   72304 cache_images.go:84] Images are preloaded, skipping loading
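	The preload step above copies a ~395 MB .tar.lz4 over SSH and unpacks it into /var so that CRI-O already has the Kubernetes images. A rough sketch of the extraction command wrapped in Go (it assumes tar, lz4, and passwordless sudo are available on the guest, as in the log; the command mirrors the one shown above):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same tar invocation as in the log: preserve xattrs (security.capability),
	// decompress with lz4, and extract into /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Println("preloaded images extracted")
}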
	I0425 20:03:17.773575   72304 kubeadm.go:928] updating node { 192.168.39.123 8444 v1.30.0 crio true true} ...
	I0425 20:03:17.773709   72304 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142196 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:17.773799   72304 ssh_runner.go:195] Run: crio config
	I0425 20:03:17.836354   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:17.836379   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:17.836391   72304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:17.836411   72304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142196 NodeName:default-k8s-diff-port-142196 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:17.836545   72304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142196"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:17.836599   72304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:17.848441   72304 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:17.848506   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:17.860320   72304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0425 20:03:17.885528   72304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:17.905701   72304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0425 20:03:17.925064   72304 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:17.930085   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:17.944507   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:18.108208   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:18.134428   72304 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196 for IP: 192.168.39.123
	I0425 20:03:18.134456   72304 certs.go:194] generating shared ca certs ...
	I0425 20:03:18.134479   72304 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:18.134672   72304 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:18.134745   72304 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:18.134761   72304 certs.go:256] generating profile certs ...
	I0425 20:03:18.134870   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/client.key
	I0425 20:03:18.245553   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key.1fb61bcb
	I0425 20:03:18.245666   72304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key
	I0425 20:03:18.245833   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:18.245880   72304 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:18.245894   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:18.245934   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:18.245964   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:18.245997   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:18.246058   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:18.246994   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:18.293000   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:18.322296   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:18.358060   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:18.390999   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0425 20:03:18.420333   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:18.450050   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:18.477983   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:18.506030   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:18.538394   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:18.574361   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:18.610827   72304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:18.634141   72304 ssh_runner.go:195] Run: openssl version
	I0425 20:03:18.640647   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:18.653988   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659400   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659458   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.665868   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:18.679247   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:18.692272   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697356   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697410   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.703694   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:18.716412   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:18.733362   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739598   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739651   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.748175   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:18.764492   72304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:18.770594   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:18.777414   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:18.784614   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:18.793453   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:18.800721   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:18.807982   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 20:03:18.814836   72304 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:18.814942   72304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:18.814992   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.864771   72304 cri.go:89] found id: ""
	I0425 20:03:18.864834   72304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:18.878200   72304 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:18.878238   72304 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:18.878245   72304 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:18.878305   72304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:18.892071   72304 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:18.892973   72304 kubeconfig.go:125] found "default-k8s-diff-port-142196" server: "https://192.168.39.123:8444"
	I0425 20:03:18.894860   72304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:18.907959   72304 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.123
	I0425 20:03:18.907989   72304 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:18.907998   72304 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:18.908045   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.951245   72304 cri.go:89] found id: ""
	I0425 20:03:18.951311   72304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:18.980033   72304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:18.995453   72304 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:18.995473   72304 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:18.995524   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0425 20:03:19.007409   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:19.007470   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:19.019782   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0425 20:03:19.031410   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:19.031493   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:19.043439   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.055936   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:19.055999   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.067986   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0425 20:03:19.080785   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:19.080869   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:19.092802   72304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:19.105024   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.240077   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.259510   72304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.019382485s)
	I0425 20:03:20.259544   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.489833   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.599319   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.784451   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:20.784606   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.284759   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:17.480654   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:17.481045   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:17.481094   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:17.481007   73691 retry.go:31] will retry after 1.238487953s: waiting for machine to come up
	I0425 20:03:18.720512   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:18.720940   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:18.720965   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:18.720902   73691 retry.go:31] will retry after 2.277078909s: waiting for machine to come up
	I0425 20:03:20.999749   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:21.000275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:21.000305   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:21.000223   73691 retry.go:31] will retry after 2.81059851s: waiting for machine to come up
	I0425 20:03:18.940880   72220 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:18.983894   72220 system_pods.go:61] "coredns-7db6d8ff4d-67sp6" [0fc3ee18-e3fe-4f4a-a5bd-4d6e3497bfa3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:18.983953   72220 system_pods.go:61] "etcd-no-preload-744552" [f3768d08-4cc6-42aa-9d1c-b0fd5d6ffed5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:18.983975   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [9d927e1f-4ddb-4b54-b1f1-f5248cb51745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:18.983984   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [cc71ce6c-22ba-4189-99dc-dd2da6506d37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:18.983993   72220 system_pods.go:61] "kube-proxy-whkbk" [a22b51a9-4854-41f5-bb5a-a81920a09b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0425 20:03:18.984026   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [5f01cd76-d6b7-4033-9aa9-38cac91965d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:18.984037   72220 system_pods.go:61] "metrics-server-569cc877fc-6n2gd" [03283a78-d44f-4f60-9743-680c18aeace3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:18.984052   72220 system_pods.go:61] "storage-provisioner" [4211811e-85ce-4da2-bc16-16909c26ced7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0425 20:03:18.984064   72220 system_pods.go:74] duration metric: took 392.509163ms to wait for pod list to return data ...
	I0425 20:03:18.984077   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:18.989373   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:18.989405   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:18.989424   72220 node_conditions.go:105] duration metric: took 5.341625ms to run NodePressure ...
	I0425 20:03:18.989446   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.809313   72220 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818730   72220 kubeadm.go:733] kubelet initialised
	I0425 20:03:19.818753   72220 kubeadm.go:734] duration metric: took 9.41696ms waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818761   72220 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:19.825762   72220 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:21.834658   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:21.785434   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.855046   72304 api_server.go:72] duration metric: took 1.070594042s to wait for apiserver process to appear ...
	I0425 20:03:21.855127   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:21.855156   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:21.855709   72304 api_server.go:269] stopped: https://192.168.39.123:8444/healthz: Get "https://192.168.39.123:8444/healthz": dial tcp 192.168.39.123:8444: connect: connection refused
	I0425 20:03:22.355555   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.430068   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.430099   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.430115   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.487089   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.487124   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.855301   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.861270   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:24.861299   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:25.356007   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.360802   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.360839   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:25.855336   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.861719   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.861753   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:23.812963   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:23.813457   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:23.813476   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:23.813429   73691 retry.go:31] will retry after 2.508562986s: waiting for machine to come up
	I0425 20:03:26.323267   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:26.323733   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:26.323761   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:26.323699   73691 retry.go:31] will retry after 4.475703543s: waiting for machine to come up
	I0425 20:03:26.355254   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.360977   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.361011   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:26.855547   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.860178   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.860203   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.355819   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.360466   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:27.360491   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.856219   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.861706   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:03:27.868486   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:27.868525   72304 api_server.go:131] duration metric: took 6.013385579s to wait for apiserver health ...
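	(The burst of 403 and 500 responses above, ending in a final 200, is the normal shape of minikube's apiserver health wait: it re-probes https://192.168.39.123:8444/healthz roughly every 500ms until the endpoint reports ok. A rough, self-contained Go sketch of that polling pattern — TLS verification is skipped here only because the test cluster uses minikube's self-signed CA, and the URL is the one from the log:)

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The test apiserver presents a self-signed minikubeCA cert,
    		// so certificate verification is skipped in this sketch.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned "ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz at %s not ok after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.123:8444/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver is healthy")
    }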
	I0425 20:03:27.868536   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:27.868544   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:27.870534   72304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:03:24.335382   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:24.335415   72220 pod_ready.go:81] duration metric: took 4.509621487s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:24.335427   72220 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:26.342530   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:28.841444   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:27.871863   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:27.885767   72304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:03:27.910270   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:27.922984   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:27.923016   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:27.923024   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:27.923030   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:27.923036   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:27.923041   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:03:27.923052   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:27.923057   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:27.923061   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:03:27.923067   72304 system_pods.go:74] duration metric: took 12.774358ms to wait for pod list to return data ...
	I0425 20:03:27.923073   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:27.927553   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:27.927582   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:27.927596   72304 node_conditions.go:105] duration metric: took 4.517775ms to run NodePressure ...
	I0425 20:03:27.927616   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:28.213013   72304 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217836   72304 kubeadm.go:733] kubelet initialised
	I0425 20:03:28.217860   72304 kubeadm.go:734] duration metric: took 4.809ms waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217869   72304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:28.225122   72304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.229920   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229940   72304 pod_ready.go:81] duration metric: took 4.794976ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.229948   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229954   72304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.234362   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234380   72304 pod_ready.go:81] duration metric: took 4.417955ms for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.234388   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234394   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.238885   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238904   72304 pod_ready.go:81] duration metric: took 4.504378ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.238917   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238924   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.314420   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314446   72304 pod_ready.go:81] duration metric: took 75.511589ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.314457   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314464   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.714128   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714165   72304 pod_ready.go:81] duration metric: took 399.694231ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.714178   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714187   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.113925   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113958   72304 pod_ready.go:81] duration metric: took 399.760651ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.113971   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113977   72304 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.514107   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514132   72304 pod_ready.go:81] duration metric: took 400.147308ms for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.514142   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514149   72304 pod_ready.go:38] duration metric: took 1.296270699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
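	(The pod_ready lines above show minikube skipping per-pod Ready checks while the node itself is still NotReady, then summarizing the extra wait. The same readiness condition can be checked directly against the API; the following client-go sketch reuses the kubeconfig path and coredns pod name that appear in this log, purely for illustration:)

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path and pod name taken from the log above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18757-6355/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-z6ls5", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }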
	I0425 20:03:29.514167   72304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:03:29.528766   72304 ops.go:34] apiserver oom_adj: -16
	I0425 20:03:29.528791   72304 kubeadm.go:591] duration metric: took 10.650540723s to restartPrimaryControlPlane
	I0425 20:03:29.528801   72304 kubeadm.go:393] duration metric: took 10.713975851s to StartCluster
	I0425 20:03:29.528816   72304 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.528887   72304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:29.530674   72304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.530951   72304 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:03:29.532792   72304 out.go:177] * Verifying Kubernetes components...
	I0425 20:03:29.531039   72304 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:03:29.531203   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:29.534328   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:29.534349   72304 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534377   72304 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534383   72304 addons.go:243] addon metrics-server should already be in state true
	I0425 20:03:29.534331   72304 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534416   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534441   72304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142196"
	I0425 20:03:29.534334   72304 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534536   72304 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534549   72304 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:03:29.534584   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534786   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534814   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534839   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534815   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534956   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.535000   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.551165   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0425 20:03:29.551680   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552007   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0425 20:03:29.552399   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.552419   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.552445   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552864   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553003   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.553028   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.553066   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.553409   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553621   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0425 20:03:29.554006   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.554024   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.554057   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.554555   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.554579   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.554908   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.555432   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.555487   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.557216   72304 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.557238   72304 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:03:29.557267   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.557642   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.557675   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.570559   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0425 20:03:29.571013   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.571538   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.571562   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.571944   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.572152   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.574003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.576061   72304 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:03:29.575108   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33777
	I0425 20:03:29.575580   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0425 20:03:29.577356   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:03:29.577374   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:03:29.577394   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.577861   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.577964   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.578333   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578356   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578514   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578543   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578735   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578909   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578947   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.579603   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.579633   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.580871   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.582436   72304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:29.581297   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.581851   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.583941   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.583971   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.583994   72304 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.584021   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:03:29.584031   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.584044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.584282   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.584430   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.586538   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.586880   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.586901   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.587119   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.587314   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.587470   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.587560   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.595882   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0425 20:03:29.596234   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.596711   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.596728   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.597146   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.597321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.598599   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.598799   72304 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:29.598811   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:03:29.598822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.600829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.601149   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.601409   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.601479   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.601537   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.772228   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:29.799159   72304 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:29.893622   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:03:29.893647   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:03:29.895090   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.919651   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:03:29.919673   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:03:29.929992   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:30.004488   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:30.004519   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:03:30.061525   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.113425632s)
	I0425 20:03:31.043511   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.148338843s)
	I0425 20:03:31.043539   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043587   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043524   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043629   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043894   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043910   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043946   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.043953   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043964   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043973   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043992   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044107   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044159   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044199   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044209   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044219   72304 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-142196"
	I0425 20:03:31.044216   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044237   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044253   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044262   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044542   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044566   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044662   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044682   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.052429   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.052451   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.052675   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.052694   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.055680   72304 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0425 20:03:31.057271   72304 addons.go:505] duration metric: took 1.526243989s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0425 20:03:32.187768   71966 start.go:364] duration metric: took 56.585448027s to acquireMachinesLock for "embed-certs-512173"
	I0425 20:03:32.187838   71966 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:32.187849   71966 fix.go:54] fixHost starting: 
	I0425 20:03:32.188220   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:32.188266   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:32.207172   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0425 20:03:32.207627   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:32.208170   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:03:32.208196   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:32.208493   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:32.208700   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:32.208837   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:03:32.210552   71966 fix.go:112] recreateIfNeeded on embed-certs-512173: state=Stopped err=<nil>
	I0425 20:03:32.210577   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	W0425 20:03:32.210741   71966 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:32.213400   71966 out.go:177] * Restarting existing kvm2 VM for "embed-certs-512173" ...
	I0425 20:03:30.803467   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804014   72712 main.go:141] libmachine: (old-k8s-version-210442) Found IP for machine: 192.168.61.136
	I0425 20:03:30.804041   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserving static IP address...
	I0425 20:03:30.804057   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has current primary IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804495   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.804535   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | skip adding static IP to network mk-old-k8s-version-210442 - found existing host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"}
	I0425 20:03:30.804562   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserved static IP address: 192.168.61.136
	I0425 20:03:30.804582   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting for SSH to be available...
	I0425 20:03:30.804599   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Getting to WaitForSSH function...
	I0425 20:03:30.807110   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807533   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.807556   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807706   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH client type: external
	I0425 20:03:30.807725   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa (-rw-------)
	I0425 20:03:30.807767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:30.807783   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | About to run SSH command:
	I0425 20:03:30.807815   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | exit 0
	I0425 20:03:30.935091   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:30.935445   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetConfigRaw
	I0425 20:03:30.936168   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:30.938767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939193   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.939246   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939428   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 20:03:30.939630   72712 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:30.939649   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:30.939870   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:30.942320   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.942771   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942923   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:30.943113   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943306   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943468   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:30.943640   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:30.943842   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:30.943854   72712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:31.052598   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:31.052625   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.052821   72712 buildroot.go:166] provisioning hostname "old-k8s-version-210442"
	I0425 20:03:31.052844   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.053080   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.056324   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056713   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.056745   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056885   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.057056   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057190   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057375   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.057549   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.057724   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.057742   72712 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210442 && echo "old-k8s-version-210442" | sudo tee /etc/hostname
	I0425 20:03:31.188461   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210442
	
	I0425 20:03:31.188494   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.191628   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192088   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.192117   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192332   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.192519   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192655   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192767   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.192944   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.193142   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.193167   72712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:31.317374   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:31.317402   72712 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:31.317436   72712 buildroot.go:174] setting up certificates
	I0425 20:03:31.317447   72712 provision.go:84] configureAuth start
	I0425 20:03:31.317461   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.317778   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:31.321012   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321388   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.321421   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321698   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.323976   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324326   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.324354   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324523   72712 provision.go:143] copyHostCerts
	I0425 20:03:31.324573   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:31.324584   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:31.324656   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:31.324764   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:31.324778   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:31.324807   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:31.324879   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:31.324890   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:31.324915   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:31.324978   72712 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210442 san=[127.0.0.1 192.168.61.136 localhost minikube old-k8s-version-210442]
	I0425 20:03:31.410674   72712 provision.go:177] copyRemoteCerts
	I0425 20:03:31.410728   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:31.410755   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.413170   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413449   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.413491   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413634   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.413832   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.413988   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.414156   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:31.502759   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:31.536662   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0425 20:03:31.565106   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:31.593254   72712 provision.go:87] duration metric: took 275.793443ms to configureAuth
	I0425 20:03:31.593287   72712 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:31.593621   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 20:03:31.593720   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.596515   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.596827   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.596859   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.597057   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.597287   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597448   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597624   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.597775   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.597927   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.597942   72712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:31.925149   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:31.925182   72712 machine.go:97] duration metric: took 985.540626ms to provisionDockerMachine
	I0425 20:03:31.925199   72712 start.go:293] postStartSetup for "old-k8s-version-210442" (driver="kvm2")
	I0425 20:03:31.925211   72712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:31.925258   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:31.925560   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:31.925596   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.928532   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.928982   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.929013   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.929232   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.929458   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.929637   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.929787   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.023009   72712 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:32.029391   72712 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:32.029426   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:32.029508   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:32.029576   72712 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:32.029664   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:32.046596   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:32.077323   72712 start.go:296] duration metric: took 152.112632ms for postStartSetup
	I0425 20:03:32.077396   72712 fix.go:56] duration metric: took 20.821829703s for fixHost
	I0425 20:03:32.077425   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.080136   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080477   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.080526   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080636   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.080836   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081067   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081283   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.081493   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:32.081695   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:32.081711   72712 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:03:32.187617   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075412.163072845
	
	I0425 20:03:32.187642   72712 fix.go:216] guest clock: 1714075412.163072845
	I0425 20:03:32.187652   72712 fix.go:229] Guest: 2024-04-25 20:03:32.163072845 +0000 UTC Remote: 2024-04-25 20:03:32.07740605 +0000 UTC m=+254.767943919 (delta=85.666795ms)
	I0425 20:03:32.187675   72712 fix.go:200] guest clock delta is within tolerance: 85.666795ms
	I0425 20:03:32.187682   72712 start.go:83] releasing machines lock for "old-k8s-version-210442", held for 20.932154384s
	I0425 20:03:32.187709   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.187998   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:32.190538   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.190898   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.190932   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.191077   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191817   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191996   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.192076   72712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:32.192116   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.192208   72712 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:32.192230   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.194821   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.194988   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195191   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195212   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195334   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195368   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195500   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195673   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195677   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195847   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195866   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196063   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.196083   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196219   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.276462   72712 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:32.300979   72712 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:30.842282   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:32.843750   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.843779   72220 pod_ready.go:81] duration metric: took 8.508343704s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.843791   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850293   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.850316   72220 pod_ready.go:81] duration metric: took 6.517764ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850327   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855621   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.855657   72220 pod_ready.go:81] duration metric: took 5.31225ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855671   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860450   72220 pod_ready.go:92] pod "kube-proxy-whkbk" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.860483   72220 pod_ready.go:81] duration metric: took 4.797706ms for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860505   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865268   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.865286   72220 pod_ready.go:81] duration metric: took 4.774354ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865294   72220 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.458446   72712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:32.465434   72712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:32.465518   72712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:32.486929   72712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:32.486954   72712 start.go:494] detecting cgroup driver to use...
	I0425 20:03:32.487019   72712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:32.509425   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:32.530999   72712 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:32.531059   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:32.547280   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:32.563594   72712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:32.699207   72712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:32.875013   72712 docker.go:233] disabling docker service ...
	I0425 20:03:32.875096   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:32.897149   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:32.916105   72712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:33.071143   72712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:33.231529   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:33.252919   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:33.277388   72712 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0425 20:03:33.277457   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.290889   72712 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:33.290953   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.305488   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.319263   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.332961   72712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:33.354086   72712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:33.373431   72712 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:33.373517   72712 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:33.398458   72712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:33.418683   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:33.595555   72712 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:33.808015   72712 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:33.810391   72712 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:33.817593   72712 start.go:562] Will wait 60s for crictl version
	I0425 20:03:33.817646   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:33.823381   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:33.866310   72712 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:33.866411   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.905561   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.952764   72712 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0425 20:03:32.214679   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Start
	I0425 20:03:32.214880   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring networks are active...
	I0425 20:03:32.215746   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network default is active
	I0425 20:03:32.216106   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network mk-embed-certs-512173 is active
	I0425 20:03:32.216566   71966 main.go:141] libmachine: (embed-certs-512173) Getting domain xml...
	I0425 20:03:32.217397   71966 main.go:141] libmachine: (embed-certs-512173) Creating domain...
	I0425 20:03:33.554665   71966 main.go:141] libmachine: (embed-certs-512173) Waiting to get IP...
	I0425 20:03:33.555670   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.556123   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.556186   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.556089   73884 retry.go:31] will retry after 278.996701ms: waiting for machine to come up
	I0425 20:03:33.836750   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.837273   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.837301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.837244   73884 retry.go:31] will retry after 324.410317ms: waiting for machine to come up
	I0425 20:03:34.163017   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.163490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.163518   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.163457   73884 retry.go:31] will retry after 403.985826ms: waiting for machine to come up
	I0425 20:03:34.568824   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.569364   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.569397   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.569330   73884 retry.go:31] will retry after 427.12179ms: waiting for machine to come up
	I0425 20:03:34.998092   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.998684   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.998709   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.998646   73884 retry.go:31] will retry after 710.71475ms: waiting for machine to come up
	I0425 20:03:35.710643   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:35.711707   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:35.711736   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:35.711616   73884 retry.go:31] will retry after 806.283051ms: waiting for machine to come up
	I0425 20:03:31.803034   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:33.813548   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:35.304283   72304 node_ready.go:49] node "default-k8s-diff-port-142196" has status "Ready":"True"
	I0425 20:03:35.304311   72304 node_ready.go:38] duration metric: took 5.505123781s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:35.304323   72304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:35.311480   72304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320910   72304 pod_ready.go:92] pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:35.320938   72304 pod_ready.go:81] duration metric: took 9.425507ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320953   72304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:33.954161   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:33.957316   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.957778   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:33.957811   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.958080   72712 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:33.964467   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:33.984277   72712 kubeadm.go:877] updating cluster {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:33.984437   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 20:03:33.984499   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:34.049402   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:34.049479   72712 ssh_runner.go:195] Run: which lz4
	I0425 20:03:34.055519   72712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 20:03:34.061481   72712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:34.061522   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0425 20:03:36.271646   72712 crio.go:462] duration metric: took 2.216165414s to copy over tarball
	I0425 20:03:36.271722   72712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:03:34.877483   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:37.373822   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:36.519514   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:36.520052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:36.520085   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:36.519968   73884 retry.go:31] will retry after 990.986618ms: waiting for machine to come up
	I0425 20:03:37.513151   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:37.513636   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:37.513669   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:37.513574   73884 retry.go:31] will retry after 1.371471682s: waiting for machine to come up
	I0425 20:03:38.886926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:38.887491   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:38.887527   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:38.887415   73884 retry.go:31] will retry after 1.633505345s: waiting for machine to come up
	I0425 20:03:40.523438   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:40.523975   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:40.524004   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:40.523926   73884 retry.go:31] will retry after 2.280577933s: waiting for machine to come up
	I0425 20:03:37.330040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.350040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.894331   72712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.622580176s)
	I0425 20:03:39.894364   72712 crio.go:469] duration metric: took 3.62268463s to extract the tarball
	I0425 20:03:39.894373   72712 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:39.965071   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:40.009534   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:40.009561   72712 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:03:40.009629   72712 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.009651   72712 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.009677   72712 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.009662   72712 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.009794   72712 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.009920   72712 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.010033   72712 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.010241   72712 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0425 20:03:40.011305   72712 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.011334   72712 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.011346   72712 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.011686   72712 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.012422   72712 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.012429   72712 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.012437   72712 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0425 20:03:40.012546   72712 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.143545   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.155203   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.157842   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.158081   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.161210   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.166515   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.181859   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0425 20:03:40.301699   72712 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0425 20:03:40.301759   72712 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.301805   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.379386   72712 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0425 20:03:40.379445   72712 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.379490   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406119   72712 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0425 20:03:40.406231   72712 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.406174   72712 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0425 20:03:40.406338   72712 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.406365   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406389   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420450   72712 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0425 20:03:40.420495   72712 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.420548   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420461   72712 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0425 20:03:40.420629   72712 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.420677   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430055   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.430110   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.430232   72712 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0425 20:03:40.430263   72712 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0425 20:03:40.430274   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.430277   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.430303   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430326   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.430389   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.582980   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0425 20:03:40.583094   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0425 20:03:40.587500   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0425 20:03:40.587564   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0425 20:03:40.587579   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0425 20:03:40.587650   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0425 20:03:40.587697   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0425 20:03:40.625942   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0425 20:03:40.941957   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:41.096086   72712 cache_images.go:92] duration metric: took 1.086507707s to LoadCachedImages
	W0425 20:03:41.096249   72712 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0425 20:03:41.096279   72712 kubeadm.go:928] updating node { 192.168.61.136 8443 v1.20.0 crio true true} ...
	I0425 20:03:41.096415   72712 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210442 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:41.096509   72712 ssh_runner.go:195] Run: crio config
	I0425 20:03:41.169311   72712 cni.go:84] Creating CNI manager for ""
	I0425 20:03:41.169341   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:41.169357   72712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:41.169397   72712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210442 NodeName:old-k8s-version-210442 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0425 20:03:41.169570   72712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210442"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:41.169639   72712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0425 20:03:41.182191   72712 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:41.182283   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:41.193546   72712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0425 20:03:41.218220   72712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:41.238647   72712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0425 20:03:41.259040   72712 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:41.263603   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:41.278007   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:41.425587   72712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:41.450990   72712 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442 for IP: 192.168.61.136
	I0425 20:03:41.451013   72712 certs.go:194] generating shared ca certs ...
	I0425 20:03:41.451034   72712 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:41.451225   72712 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:41.451307   72712 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:41.451323   72712 certs.go:256] generating profile certs ...
	I0425 20:03:41.451449   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.key
	I0425 20:03:41.451528   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key.1533c9ac
	I0425 20:03:41.451587   72712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key
	I0425 20:03:41.451789   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:41.451860   72712 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:41.451880   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:41.451915   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:41.451945   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:41.451968   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:41.452023   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:41.452870   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:41.510467   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:41.555595   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:41.606059   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:41.648206   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0425 20:03:41.690090   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:41.727674   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:41.766537   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:41.799524   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:41.828668   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:41.860964   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:41.890272   72712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:41.911787   72712 ssh_runner.go:195] Run: openssl version
	I0425 20:03:41.918926   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:41.933194   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.938995   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.939060   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.945934   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:41.959859   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:41.974906   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.980931   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.981006   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.987789   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:42.002455   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:42.016797   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023789   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023853   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.033189   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:42.047467   72712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:42.053552   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:42.063130   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:42.070290   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:42.079527   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:42.087983   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:42.096658   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 20:03:42.103477   72712 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:42.103596   72712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:42.103649   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.155980   72712 cri.go:89] found id: ""
	I0425 20:03:42.156085   72712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:42.172499   72712 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:42.172525   72712 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:42.172532   72712 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:42.172580   72712 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:42.187864   72712 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:42.188948   72712 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210442" does not appear in /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:42.189659   72712 kubeconfig.go:62] /home/jenkins/minikube-integration/18757-6355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210442" cluster setting kubeconfig missing "old-k8s-version-210442" context setting]
	I0425 20:03:42.190635   72712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:42.192402   72712 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:42.207284   72712 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.136
	I0425 20:03:42.207318   72712 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:42.207329   72712 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:42.207403   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.251184   72712 cri.go:89] found id: ""
	I0425 20:03:42.251257   72712 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:42.271727   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:42.289161   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:42.289184   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:42.289237   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:42.302492   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:42.302588   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:42.317790   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:42.329940   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:42.330002   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:42.342772   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:39.375028   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:41.871821   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.805640   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:42.806121   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:42.806148   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:42.806072   73884 retry.go:31] will retry after 2.588054599s: waiting for machine to come up
	I0425 20:03:45.395282   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:45.395712   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:45.395759   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:45.395662   73884 retry.go:31] will retry after 3.473643777s: waiting for machine to come up
	I0425 20:03:41.329479   72304 pod_ready.go:92] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.329511   72304 pod_ready.go:81] duration metric: took 6.008549199s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.329523   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335660   72304 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.335688   72304 pod_ready.go:81] duration metric: took 6.15557ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335700   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341409   72304 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.341433   72304 pod_ready.go:81] duration metric: took 5.723469ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341446   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347145   72304 pod_ready.go:92] pod "kube-proxy-bqmtp" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.347167   72304 pod_ready.go:81] duration metric: took 5.713095ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347179   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376913   72304 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.376939   72304 pod_ready.go:81] duration metric: took 29.751827ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376951   72304 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:43.383378   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:45.884869   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.356480   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:42.357280   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:42.370403   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:42.384245   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:42.384332   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:42.398271   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:42.412361   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:42.575076   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.186458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.480114   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.594128   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.707129   72712 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:43.707221   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.207406   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.707733   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.208100   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.708041   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.207966   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.707255   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:47.207754   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:43.873747   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:46.374439   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:48.871928   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:48.872457   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:48.872490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:48.872393   73884 retry.go:31] will retry after 4.148424216s: waiting for machine to come up
	I0425 20:03:48.384599   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.883246   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:47.707730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.208213   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.707685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.207879   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.707914   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.208278   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.707691   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.207600   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.707365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:52.207931   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.872282   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.872356   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.874452   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:53.022813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023343   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has current primary IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023367   71966 main.go:141] libmachine: (embed-certs-512173) Found IP for machine: 192.168.50.7
	I0425 20:03:53.023381   71966 main.go:141] libmachine: (embed-certs-512173) Reserving static IP address...
	I0425 20:03:53.023750   71966 main.go:141] libmachine: (embed-certs-512173) Reserved static IP address: 192.168.50.7
	I0425 20:03:53.023770   71966 main.go:141] libmachine: (embed-certs-512173) Waiting for SSH to be available...
	I0425 20:03:53.023791   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.023827   71966 main.go:141] libmachine: (embed-certs-512173) DBG | skip adding static IP to network mk-embed-certs-512173 - found existing host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"}
	I0425 20:03:53.023848   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Getting to WaitForSSH function...
	I0425 20:03:53.025753   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.026132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026244   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH client type: external
	I0425 20:03:53.026268   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa (-rw-------)
	I0425 20:03:53.026301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:53.026313   71966 main.go:141] libmachine: (embed-certs-512173) DBG | About to run SSH command:
	I0425 20:03:53.026325   71966 main.go:141] libmachine: (embed-certs-512173) DBG | exit 0
	I0425 20:03:53.158487   71966 main.go:141] libmachine: (embed-certs-512173) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:53.158846   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetConfigRaw
	I0425 20:03:53.159567   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.161881   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162200   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.162257   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162492   71966 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/config.json ...
	I0425 20:03:53.162658   71966 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:53.162675   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:53.162875   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.164797   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.165140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165256   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.165402   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165561   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165659   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.165815   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.165989   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.166002   71966 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:53.283185   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:53.283219   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283455   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:03:53.283480   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283690   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.286427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.286843   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286969   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.287164   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287350   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.287641   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.287881   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.287904   71966 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-512173 && echo "embed-certs-512173" | sudo tee /etc/hostname
	I0425 20:03:53.423037   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-512173
	
	I0425 20:03:53.423067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.425749   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.426140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426329   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.426501   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426640   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426747   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.426866   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.427015   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.427083   71966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-512173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-512173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-512173' | sudo tee -a /etc/hosts; 
				fi
			fi
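
The SSH commands above pin the freshly provisioned hostname into /etc/hosts: replace the 127.0.1.1 entry if one exists, otherwise append one. As an illustrative sketch only (not minikube's actual helper; the function name is invented), the same command string can be assembled in Go before being handed to the SSH runner:

package main

import "fmt"

// hostsPatchCmd builds the shell snippet shown in the log above: ensure that
// name resolves locally via a 127.0.1.1 line in /etc/hosts.
func hostsPatchCmd(name string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostsPatchCmd("embed-certs-512173"))
}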
	I0425 20:03:53.553687   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:53.553715   71966 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:53.553749   71966 buildroot.go:174] setting up certificates
	I0425 20:03:53.553758   71966 provision.go:84] configureAuth start
	I0425 20:03:53.553775   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.554053   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.556655   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.556995   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.557034   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.557121   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.559341   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559692   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.559718   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559897   71966 provision.go:143] copyHostCerts
	I0425 20:03:53.559970   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:53.559984   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:53.560049   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:53.560129   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:53.560136   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:53.560155   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:53.560203   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:53.560214   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:53.560233   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:53.560278   71966 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-512173 san=[127.0.0.1 192.168.50.7 embed-certs-512173 localhost minikube]
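
The provision.go line above generates a server certificate whose SAN list covers 127.0.0.1, the VM IP, the profile name, localhost and minikube. Below is a minimal, hypothetical Go sketch of the same idea; it is self-signed for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair named in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and certificate template; the SAN entries mirror the san=[...] list above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-512173"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-512173", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.7")},
	}
	// Self-signed for the sketch: the template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}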
	I0425 20:03:53.621714   71966 provision.go:177] copyRemoteCerts
	I0425 20:03:53.621777   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:53.621804   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.624556   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.624883   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.624914   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.625128   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.625324   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.625458   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.625602   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:53.715477   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:03:53.743782   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:53.771468   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:53.798701   71966 provision.go:87] duration metric: took 244.92871ms to configureAuth
	I0425 20:03:53.798726   71966 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:53.798922   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:53.798991   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.801607   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.801946   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.801972   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.802187   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.802373   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802628   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.802833   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.802986   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.803000   71966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:54.117164   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:54.117193   71966 machine.go:97] duration metric: took 954.522384ms to provisionDockerMachine
	I0425 20:03:54.117207   71966 start.go:293] postStartSetup for "embed-certs-512173" (driver="kvm2")
	I0425 20:03:54.117219   71966 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:54.117238   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.117558   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:54.117591   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.120060   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.120454   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120575   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.120761   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.120891   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.121002   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.209919   71966 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:54.215633   71966 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:54.215663   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:54.215747   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:54.215860   71966 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:54.215996   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:54.227250   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:54.257169   71966 start.go:296] duration metric: took 139.949813ms for postStartSetup
	I0425 20:03:54.257212   71966 fix.go:56] duration metric: took 22.069363419s for fixHost
	I0425 20:03:54.257237   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.260255   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260588   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.260613   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260731   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.260928   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261099   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261266   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.261447   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:54.261644   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:54.261655   71966 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:03:54.376222   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075434.352338373
	
	I0425 20:03:54.376245   71966 fix.go:216] guest clock: 1714075434.352338373
	I0425 20:03:54.376255   71966 fix.go:229] Guest: 2024-04-25 20:03:54.352338373 +0000 UTC Remote: 2024-04-25 20:03:54.257217658 +0000 UTC m=+368.446046405 (delta=95.120715ms)
	I0425 20:03:54.376287   71966 fix.go:200] guest clock delta is within tolerance: 95.120715ms
	I0425 20:03:54.376295   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 22.188484297s
	I0425 20:03:54.376317   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.376600   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:54.379217   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379646   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.379678   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379869   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380436   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380633   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380729   71966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:54.380779   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.380857   71966 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:54.380880   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.383698   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384081   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384283   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384471   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.384610   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.384647   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384683   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384781   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.384821   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384982   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.385131   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.385330   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.468506   71966 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:54.493995   71966 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:54.642719   71966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:54.649565   71966 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:54.649632   71966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:54.667526   71966 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:54.667546   71966 start.go:494] detecting cgroup driver to use...
	I0425 20:03:54.667596   71966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:54.685384   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:54.701852   71966 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:54.701905   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:54.718559   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:54.734874   71966 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:54.858325   71966 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:55.045158   71966 docker.go:233] disabling docker service ...
	I0425 20:03:55.045219   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:55.061668   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:55.076486   71966 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:55.207287   71966 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:55.352537   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:55.369470   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:55.392638   71966 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:55.392718   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.404590   71966 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:55.404655   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.416129   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.427176   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.438632   71966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:55.450725   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.462912   71966 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.485340   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.498134   71966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:55.508378   71966 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:55.508451   71966 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:55.523073   71966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:55.533901   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:55.666845   71966 ssh_runner.go:195] Run: sudo systemctl restart crio
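
The run above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl), then reloads systemd and restarts CRI-O. A compressed, illustrative sketch of that command sequence follows; it is not minikube's code and simply prints the shell strings an SSH runner would execute:

package main

import "fmt"

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	pause := "registry.k8s.io/pause:3.9"
	cmds := []string{
		// point CRI-O at the pause image kubeadm expects
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pause, conf),
		// match the kubelet's cgroupfs cgroup driver
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		// apply the edits by restarting the runtime
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		fmt.Println(c)
	}
}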
	I0425 20:03:55.828131   71966 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:55.828199   71966 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:55.833768   71966 start.go:562] Will wait 60s for crictl version
	I0425 20:03:55.833824   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:03:55.838000   71966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:55.881652   71966 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:55.881753   71966 ssh_runner.go:195] Run: crio --version
	I0425 20:03:55.917675   71966 ssh_runner.go:195] Run: crio --version
	I0425 20:03:55.953046   71966 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:03:52.884447   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:54.884538   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.707459   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.208241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.707431   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.207538   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.707289   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.207319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.707625   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.207562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.708324   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:57.207348   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.373713   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.374476   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:55.954484   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:55.957214   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957611   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:55.957638   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957832   71966 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:55.962420   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:55.976512   71966 kubeadm.go:877] updating cluster {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:55.976626   71966 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:55.976694   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:56.019881   71966 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:56.019942   71966 ssh_runner.go:195] Run: which lz4
	I0425 20:03:56.024524   71966 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 20:03:56.029297   71966 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:56.029339   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:57.736602   71966 crio.go:462] duration metric: took 1.712117844s to copy over tarball
	I0425 20:03:57.736666   71966 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:04:00.331696   71966 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.594977915s)
	I0425 20:04:00.331739   71966 crio.go:469] duration metric: took 2.595109768s to extract the tarball
	I0425 20:04:00.331751   71966 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:04:00.375437   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:04:00.430963   71966 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:04:00.430987   71966 cache_images.go:84] Images are preloaded, skipping loading
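
crio.go's preload check above works by listing images through crictl as JSON and looking for the expected kube-apiserver tag, which is absent before the tarball is extracted and present afterwards. Here is a rough sketch of that check, assuming the usual crictl images --output json shape; this is not the shipped minikube code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages mirrors the relevant part of the crictl JSON output.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.0")
	fmt.Println(ok, err)
}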
	I0425 20:04:00.430994   71966 kubeadm.go:928] updating node { 192.168.50.7 8443 v1.30.0 crio true true} ...
	I0425 20:04:00.431081   71966 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-512173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:04:00.431154   71966 ssh_runner.go:195] Run: crio config
	I0425 20:04:00.487082   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:00.487106   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:00.487117   71966 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:04:00.487135   71966 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-512173 NodeName:embed-certs-512173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:04:00.487306   71966 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-512173"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
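The kubeadm, kubelet and kube-proxy documents printed above are what get written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Purely as an illustration of the KubeletConfiguration section's shape, here is a hypothetical snippet that unmarshals a few of its fields with gopkg.in/yaml.v3 (the struct and field selection are invented for the example):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig picks out a handful of keys from the KubeletConfiguration above.
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	ClusterDomain            string `yaml:"clusterDomain"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

func main() {
	doc := `
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
clusterDomain: "cluster.local"
failSwapOn: false
`
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
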
	I0425 20:04:00.487378   71966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:04:00.498819   71966 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:04:00.498881   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:04:00.509212   71966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0425 20:04:00.527703   71966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:04:00.546867   71966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0425 20:04:00.566302   71966 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0425 20:04:00.570629   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:04:00.584123   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:00.717589   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:00.743108   71966 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173 for IP: 192.168.50.7
	I0425 20:04:00.743173   71966 certs.go:194] generating shared ca certs ...
	I0425 20:04:00.743201   71966 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:00.743397   71966 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:04:00.743462   71966 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:04:00.743480   71966 certs.go:256] generating profile certs ...
	I0425 20:04:00.743644   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/client.key
	I0425 20:04:00.743729   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key.4a0c231f
	I0425 20:04:00.743789   71966 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key
	I0425 20:04:00.743964   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:04:00.744019   71966 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:04:00.744033   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:04:00.744064   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:04:00.744093   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:04:00.744117   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:04:00.744158   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:04:00.745130   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:04:00.797856   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:04:00.848631   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:56.885355   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:58.885857   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.707868   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.208319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.207410   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.707562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.208006   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.708245   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.208178   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.707239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:02.207926   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.873851   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.372919   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:00.877499   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:04:01.210716   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0425 20:04:01.239562   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:04:01.267356   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:04:01.295649   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:04:01.323739   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:04:01.350440   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:04:01.379693   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:04:01.409347   71966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:04:01.429857   71966 ssh_runner.go:195] Run: openssl version
	I0425 20:04:01.437636   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:04:01.449656   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455022   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455074   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.461442   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:04:01.473323   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:04:01.485988   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491661   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491719   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.498567   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:04:01.510983   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:04:01.523098   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528619   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528667   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.535129   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:04:01.546668   71966 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:04:01.552076   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:04:01.558928   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:04:01.566406   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:04:01.574761   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:04:01.581250   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:04:01.588506   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 20:04:01.594844   71966 kubeadm.go:391] StartCluster: {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:04:01.594917   71966 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:04:01.594978   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.648050   71966 cri.go:89] found id: ""
	I0425 20:04:01.648155   71966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:04:01.664291   71966 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:04:01.664318   71966 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:04:01.664325   71966 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:04:01.664387   71966 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:04:01.678686   71966 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:04:01.680096   71966 kubeconfig.go:125] found "embed-certs-512173" server: "https://192.168.50.7:8443"
	I0425 20:04:01.682375   71966 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:04:01.699073   71966 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0425 20:04:01.699109   71966 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:04:01.699122   71966 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:04:01.699190   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.744556   71966 cri.go:89] found id: ""
	I0425 20:04:01.744633   71966 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:04:01.767121   71966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:04:01.778499   71966 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:04:01.778517   71966 kubeadm.go:156] found existing configuration files:
	
	I0425 20:04:01.778575   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:04:01.789171   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:04:01.789242   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:04:01.800000   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:04:01.811015   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:04:01.811078   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:04:01.821752   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.832900   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:04:01.832962   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.844058   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:04:01.854774   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:04:01.854824   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:04:01.866086   71966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:04:01.879229   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.180778   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.971467   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.202841   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.286951   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.412260   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:04:03.412375   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.913176   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.413418   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.443763   71966 api_server.go:72] duration metric: took 1.031501246s to wait for apiserver process to appear ...
	I0425 20:04:04.443796   71966 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:04:04.443816   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:04.444334   71966 api_server.go:269] stopped: https://192.168.50.7:8443/healthz: Get "https://192.168.50.7:8443/healthz": dial tcp 192.168.50.7:8443: connect: connection refused
	I0425 20:04:04.943937   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:01.384590   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:03.885859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.707796   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.207913   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.708267   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.207491   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.707894   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.207346   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.707801   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.208283   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.707342   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:07.208190   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.381611   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:06.875270   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.463721   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.463767   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.463785   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.479254   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.479283   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.944812   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.949683   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:07.949710   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.444237   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.451663   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.451706   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.944231   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.949165   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.949194   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.444776   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.449703   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.449732   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.943865   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.948474   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.948509   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.444040   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.448740   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:10.448781   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.944487   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.950181   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:04:10.957455   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:04:10.957479   71966 api_server.go:131] duration metric: took 6.513676295s to wait for apiserver health ...
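	The 403 and 500 responses above are the apiserver coming up in stages: anonymous requests are rejected until the RBAC bootstrap roles exist, /healthz then returns 500 while the remaining post-start hooks finish, and finally 200 "ok" after about 6.5 s. A minimal Go sketch of that polling pattern, assuming the endpoint and the roughly 500 ms retry interval visible in the timestamps; certificate verification is skipped because the probe only needs the HTTP status. This is illustrative, not minikube's actual api_server.go:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's serving certificate is not trusted by this probe;
		// only the status code matters, so verification is skipped.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			// 403 (anonymous forbidden) and 500 (post-start hooks pending)
			// both mean "not ready yet"; log and retry.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.7:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}
```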
	I0425 20:04:10.957487   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:10.957496   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:10.959196   71966 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:04:06.384595   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:08.883972   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.707466   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.207370   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.707951   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.207604   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.708057   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.207422   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.707391   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.207510   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.707828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:12.207519   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.960795   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:04:10.977005   71966 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
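	The 496-byte /etc/cni/net.d/1-k8s.conflist written here is the bridge CNI configuration that the "Configuring bridge CNI" message refers to. Its exact contents are not shown in the log, so the sketch below writes a typical bridge-plus-portmap conflist of the kind kubelet's CNI support consumes; the subnet, plugin set and field values are assumptions for illustration only, not the real file:

```go
package main

import "os"

// Hypothetical contents: the actual 1-k8s.conflist written by minikube is
// not reproduced in the log above, so this is only a representative example
// of a bridge CNI config.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```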
	I0425 20:04:11.001393   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:04:11.021408   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:04:11.021439   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:04:11.021453   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:04:11.021466   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:04:11.021478   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:04:11.021495   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:04:11.021502   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:04:11.021513   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:04:11.021521   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:04:11.021533   71966 system_pods.go:74] duration metric: took 20.120592ms to wait for pod list to return data ...
	I0425 20:04:11.021540   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:04:11.025328   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:04:11.025360   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:04:11.025374   71966 node_conditions.go:105] duration metric: took 3.826846ms to run NodePressure ...
	I0425 20:04:11.025394   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:11.304673   71966 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309061   71966 kubeadm.go:733] kubelet initialised
	I0425 20:04:11.309082   71966 kubeadm.go:734] duration metric: took 4.385794ms waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309089   71966 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:11.314583   71966 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.319490   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319515   71966 pod_ready.go:81] duration metric: took 4.900118ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.319524   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319534   71966 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.324084   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324101   71966 pod_ready.go:81] duration metric: took 4.557199ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.324108   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324113   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.328151   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328167   71966 pod_ready.go:81] duration metric: took 4.047894ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.328174   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328184   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.404944   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.404982   71966 pod_ready.go:81] duration metric: took 76.789573ms for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.404997   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.405006   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.805191   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805221   71966 pod_ready.go:81] duration metric: took 400.202708ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.805238   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805248   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.205817   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205847   71966 pod_ready.go:81] duration metric: took 400.591033ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.205858   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205866   71966 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.605705   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605736   71966 pod_ready.go:81] duration metric: took 399.849241ms for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.605745   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605754   71966 pod_ready.go:38] duration metric: took 1.29665644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
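	Each wait above short-circuits with a "skipping!" error because the node itself does not yet report Ready; the same checks succeed a few seconds later (see the 20:04:18 entries below) once the node condition flips. A minimal client-go sketch of polling those system-critical pods for the Ready condition, assuming the kubeconfig path shown earlier in the log; the label selectors and namespace come from the log, while the helper itself is illustrative rather than minikube's pod_ready.go:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18757-6355/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same system-critical selectors the log waits on.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(4 * time.Minute)
	for _, sel := range selectors {
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				if !isPodReady(&pods.Items[i]) {
					ready = false
				}
			}
			if ready {
				fmt.Printf("pods matching %q are Ready\n", sel)
				break
			}
			if time.Now().After(deadline) {
				panic(fmt.Sprintf("timed out waiting for pods matching %q", sel))
			}
			time.Sleep(2 * time.Second)
		}
	}
}
```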
	I0425 20:04:12.605776   71966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:04:12.620368   71966 ops.go:34] apiserver oom_adj: -16
	I0425 20:04:12.620397   71966 kubeadm.go:591] duration metric: took 10.956065292s to restartPrimaryControlPlane
	I0425 20:04:12.620405   71966 kubeadm.go:393] duration metric: took 11.025567867s to StartCluster
	I0425 20:04:12.620419   71966 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.620492   71966 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:04:12.623272   71966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.623577   71966 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:04:12.625335   71966 out.go:177] * Verifying Kubernetes components...
	I0425 20:04:12.623608   71966 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:04:12.623775   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:04:12.626619   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:12.626625   71966 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-512173"
	I0425 20:04:12.626642   71966 addons.go:69] Setting metrics-server=true in profile "embed-certs-512173"
	I0425 20:04:12.626664   71966 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-512173"
	W0425 20:04:12.626674   71966 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:04:12.626681   71966 addons.go:234] Setting addon metrics-server=true in "embed-certs-512173"
	W0425 20:04:12.626690   71966 addons.go:243] addon metrics-server should already be in state true
	I0425 20:04:12.626623   71966 addons.go:69] Setting default-storageclass=true in profile "embed-certs-512173"
	I0425 20:04:12.626709   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626714   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626718   71966 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-512173"
	I0425 20:04:12.626985   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627013   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627020   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627035   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627088   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627130   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.642680   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0425 20:04:12.642798   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0425 20:04:12.642972   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0425 20:04:12.643182   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643288   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643418   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643671   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643696   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643871   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643884   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643893   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643915   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.644227   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644235   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644403   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.644431   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644819   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.644942   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.644980   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.645022   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.647992   71966 addons.go:234] Setting addon default-storageclass=true in "embed-certs-512173"
	W0425 20:04:12.648011   71966 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:04:12.648045   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.648393   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.648429   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.660989   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41421
	I0425 20:04:12.661534   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.662561   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.662592   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.662614   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0425 20:04:12.662804   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0425 20:04:12.662947   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663016   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663116   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.663173   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663515   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663547   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663585   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663604   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663882   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663920   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.664096   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.664487   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.664506   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.665031   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.667087   71966 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:04:12.668326   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:04:12.668343   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:04:12.668361   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.666460   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.669907   71966 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:04:09.373628   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:11.376301   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.671391   71966 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.671411   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:04:12.671427   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.671566   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672113   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.672132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672233   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.672353   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.672439   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.672525   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.674511   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.674926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.674951   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.675178   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.675357   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.675505   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.675662   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.683720   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0425 20:04:12.684195   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.684736   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.684755   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.685100   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.685282   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.687009   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.687257   71966 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.687277   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:04:12.687325   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.689958   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690356   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.690374   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690446   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.690655   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.690841   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.690989   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.846840   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:12.865045   71966 node_ready.go:35] waiting up to 6m0s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:12.938848   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:04:12.938875   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:04:12.941038   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.959316   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.977813   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:04:12.977841   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:04:13.050586   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:13.050610   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:04:13.111207   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:14.253195   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.31212607s)
	I0425 20:04:14.253252   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253247   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.293897647s)
	I0425 20:04:14.253268   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253303   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253371   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253625   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253641   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253650   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253656   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253677   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253690   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253699   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253711   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253876   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254099   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253911   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253949   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253977   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254193   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.260565   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.260584   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.260830   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.260850   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.342979   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.231720554s)
	I0425 20:04:14.343042   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343349   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.343358   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343374   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343390   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343398   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343602   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343623   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343633   71966 addons.go:470] Verifying addon metrics-server=true in "embed-certs-512173"
	I0425 20:04:14.346631   71966 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0425 20:04:14.347936   71966 addons.go:505] duration metric: took 1.724328435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
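	The addon manifests above are copied into /etc/kubernetes/addons on the node and then applied with a single kubectl invocation carrying multiple -f flags, run against the node-local kubeconfig. A minimal Go sketch of that apply step using the same paths and binary version as the log; the only simplification is shelling out locally instead of going through minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Manifests and kubectl path as logged for the metrics-server addon.
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.0/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}
```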
	I0425 20:04:14.869074   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.383960   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:13.384840   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.883656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.707816   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.207561   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.708264   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.207822   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.707509   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.207507   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.707899   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.208254   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.708246   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:17.207508   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.873212   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.873263   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:18.373183   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:16.870001   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:18.368960   71966 node_ready.go:49] node "embed-certs-512173" has status "Ready":"True"
	I0425 20:04:18.368991   71966 node_ready.go:38] duration metric: took 5.503919958s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:18.369003   71966 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:18.375440   71966 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380902   71966 pod_ready.go:92] pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.380920   71966 pod_ready.go:81] duration metric: took 5.456921ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380928   71966 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386330   71966 pod_ready.go:92] pod "etcd-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.386386   71966 pod_ready.go:81] duration metric: took 5.451019ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386402   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391115   71966 pod_ready.go:92] pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.391138   71966 pod_ready.go:81] duration metric: took 4.727835ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391149   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:20.398316   71966 pod_ready.go:102] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.885191   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:20.384439   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.707948   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.207953   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.707659   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.207609   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.707567   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.207989   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.707938   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.208305   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.707827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:22.207940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.374376   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.873180   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.899221   71966 pod_ready.go:92] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.899240   71966 pod_ready.go:81] duration metric: took 4.508083804s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.899250   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904904   71966 pod_ready.go:92] pod "kube-proxy-8247p" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.904922   71966 pod_ready.go:81] duration metric: took 5.665557ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904929   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910035   71966 pod_ready.go:92] pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.910051   71966 pod_ready.go:81] duration metric: took 5.116298ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910059   71966 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:24.919233   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.884480   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:25.384287   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.707381   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.207532   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.707461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.208239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.707742   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.208365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.707323   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.207485   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.707727   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:27.208332   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.373538   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.872428   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.420297   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.918808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.385722   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.883321   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.707275   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.207776   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.708096   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.207685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.708249   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.207647   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.707943   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.207471   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.707902   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:32.207582   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.872576   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.372818   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.416593   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:34.416976   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:31.884120   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:33.885341   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:35.886190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.708066   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.208090   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.707474   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.207664   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.708110   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.208160   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.707940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.207505   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.708334   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:37.207939   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.375813   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.873166   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.417945   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.916796   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.384530   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.384673   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:37.707256   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.207621   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.708237   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.208327   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.707542   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.207371   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.708300   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.207577   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.708097   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:42.207684   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.876272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:41.372217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.918223   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:43.420086   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.389390   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:44.885243   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.708257   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.207407   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.707548   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:43.707618   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:43.753656   72712 cri.go:89] found id: ""
	I0425 20:04:43.753686   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.753698   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:43.753706   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:43.753770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:43.797957   72712 cri.go:89] found id: ""
	I0425 20:04:43.797982   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.797991   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:43.797996   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:43.798051   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:43.836700   72712 cri.go:89] found id: ""
	I0425 20:04:43.836729   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.836737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:43.836742   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:43.836795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:43.883452   72712 cri.go:89] found id: ""
	I0425 20:04:43.883478   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.883486   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:43.883492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:43.883544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:43.929975   72712 cri.go:89] found id: ""
	I0425 20:04:43.930004   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.930014   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:43.930022   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:43.930089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:43.967648   72712 cri.go:89] found id: ""
	I0425 20:04:43.967681   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.967693   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:43.967701   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:43.967758   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:44.011024   72712 cri.go:89] found id: ""
	I0425 20:04:44.011048   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.011072   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:44.011078   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:44.011129   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:44.050233   72712 cri.go:89] found id: ""
	I0425 20:04:44.050263   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.050274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:44.050286   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:44.050302   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:44.196275   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:44.196307   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:44.196323   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:44.260707   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:44.260748   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:44.306051   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:44.306090   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:44.357643   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:44.357682   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:46.875982   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:46.890987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:46.891062   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:46.935855   72712 cri.go:89] found id: ""
	I0425 20:04:46.935878   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.935885   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:46.935891   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:46.935948   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:46.978634   72712 cri.go:89] found id: ""
	I0425 20:04:46.978662   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.978674   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:46.978681   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:46.978749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:47.019845   72712 cri.go:89] found id: ""
	I0425 20:04:47.019864   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.019872   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:47.019877   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:47.019933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:47.065002   72712 cri.go:89] found id: ""
	I0425 20:04:47.065040   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.065064   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:47.065072   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:47.065139   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:47.106370   72712 cri.go:89] found id: ""
	I0425 20:04:47.106404   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.106416   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:47.106423   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:47.106483   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:47.143851   72712 cri.go:89] found id: ""
	I0425 20:04:47.143874   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.143883   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:47.143888   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:47.143932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:47.186130   72712 cri.go:89] found id: ""
	I0425 20:04:47.186160   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.186168   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:47.186174   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:47.186238   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:47.228959   72712 cri.go:89] found id: ""
	I0425 20:04:47.228984   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.228992   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:47.229000   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:47.229010   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:47.299852   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:47.299893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:47.346078   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:47.346111   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:43.872670   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:46.373259   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:45.917948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.919494   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:50.420952   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.388353   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:49.884300   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.405897   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:47.405932   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:47.424426   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:47.424455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:47.506603   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.007697   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:50.023258   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:50.023333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:50.066794   72712 cri.go:89] found id: ""
	I0425 20:04:50.066827   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.066836   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:50.066842   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:50.066913   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:50.109167   72712 cri.go:89] found id: ""
	I0425 20:04:50.109200   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.109212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:50.109219   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:50.109306   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:50.151854   72712 cri.go:89] found id: ""
	I0425 20:04:50.151878   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.151886   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:50.151892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:50.151940   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:50.190600   72712 cri.go:89] found id: ""
	I0425 20:04:50.190632   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.190644   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:50.190672   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:50.190742   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:50.232851   72712 cri.go:89] found id: ""
	I0425 20:04:50.232874   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.232883   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:50.232889   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:50.232935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:50.274941   72712 cri.go:89] found id: ""
	I0425 20:04:50.274971   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.274983   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:50.274990   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:50.275069   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:50.320954   72712 cri.go:89] found id: ""
	I0425 20:04:50.320981   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.320992   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:50.320999   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:50.321068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:50.361799   72712 cri.go:89] found id: ""
	I0425 20:04:50.361829   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.361839   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:50.361847   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:50.361858   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:50.457792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.457819   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:50.457834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:50.539653   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:50.539702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:50.598740   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:50.598774   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:50.650501   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:50.650533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:48.872490   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.374484   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:52.919420   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:55.420126   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.887536   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:54.389174   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:53.167827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:53.183324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:53.183403   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:53.227598   72712 cri.go:89] found id: ""
	I0425 20:04:53.227641   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.227650   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:53.227655   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:53.227700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:53.271170   72712 cri.go:89] found id: ""
	I0425 20:04:53.271200   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.271212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:53.271220   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:53.271304   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:53.318185   72712 cri.go:89] found id: ""
	I0425 20:04:53.318233   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.318246   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:53.318255   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:53.318324   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:53.372199   72712 cri.go:89] found id: ""
	I0425 20:04:53.372228   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.372238   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:53.372244   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:53.372367   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:53.414048   72712 cri.go:89] found id: ""
	I0425 20:04:53.414080   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.414091   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:53.414099   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:53.414170   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:53.455746   72712 cri.go:89] found id: ""
	I0425 20:04:53.455806   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.455819   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:53.455827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:53.455901   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:53.497969   72712 cri.go:89] found id: ""
	I0425 20:04:53.497996   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.498004   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:53.498011   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:53.498057   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:53.543642   72712 cri.go:89] found id: ""
	I0425 20:04:53.543668   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.543675   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:53.543684   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:53.543693   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:53.596106   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:53.596144   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:53.612755   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:53.612787   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:53.693068   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:53.693089   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:53.693102   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:53.771499   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:53.771535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:56.322663   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:56.336866   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:56.336945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:56.375515   72712 cri.go:89] found id: ""
	I0425 20:04:56.375556   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.375567   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:56.375574   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:56.375641   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:56.423230   72712 cri.go:89] found id: ""
	I0425 20:04:56.423261   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.423273   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:56.423281   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:56.423366   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:56.467786   72712 cri.go:89] found id: ""
	I0425 20:04:56.467814   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.467835   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:56.467842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:56.467895   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:56.517671   72712 cri.go:89] found id: ""
	I0425 20:04:56.517696   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.517708   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:56.517715   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:56.517770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:56.558622   72712 cri.go:89] found id: ""
	I0425 20:04:56.558651   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.558662   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:56.558669   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:56.558746   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:56.601350   72712 cri.go:89] found id: ""
	I0425 20:04:56.601374   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.601382   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:56.601387   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:56.601444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:56.645892   72712 cri.go:89] found id: ""
	I0425 20:04:56.645923   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.645934   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:56.645940   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:56.646001   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:56.691619   72712 cri.go:89] found id: ""
	I0425 20:04:56.691645   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.691656   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:56.691665   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:56.691679   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:56.744854   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:56.744891   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:56.762523   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:56.762556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:56.843396   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:56.843422   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:56.843437   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:56.933785   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:56.933825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:53.872514   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.372956   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:58.373649   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:57.917208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.920979   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.884907   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.385506   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.481512   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:59.497510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:59.497588   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:59.547382   72712 cri.go:89] found id: ""
	I0425 20:04:59.547412   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.547423   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:59.547432   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:59.547486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:59.597671   72712 cri.go:89] found id: ""
	I0425 20:04:59.597699   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.597711   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:59.597717   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:59.597762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:59.641455   72712 cri.go:89] found id: ""
	I0425 20:04:59.641486   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.641497   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:59.641503   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:59.641613   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:59.685052   72712 cri.go:89] found id: ""
	I0425 20:04:59.685092   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.685104   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:59.685112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:59.685173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:59.735912   72712 cri.go:89] found id: ""
	I0425 20:04:59.735943   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.735951   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:59.735957   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:59.736025   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:59.799294   72712 cri.go:89] found id: ""
	I0425 20:04:59.799322   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.799332   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:59.799338   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:59.799395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:59.871270   72712 cri.go:89] found id: ""
	I0425 20:04:59.871297   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.871308   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:59.871315   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:59.871377   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:59.919001   72712 cri.go:89] found id: ""
	I0425 20:04:59.919091   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.919110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:59.919120   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:59.919135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:59.973458   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:59.973498   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:59.989729   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:59.989757   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:00.072887   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:00.072911   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:00.072926   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:00.153886   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:00.153921   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:00.873812   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.372969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.417960   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:04.420353   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:01.885238   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.887277   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:02.722771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:02.722831   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:02.770101   72712 cri.go:89] found id: ""
	I0425 20:05:02.770134   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.770147   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:02.770154   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:02.770224   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:02.817819   72712 cri.go:89] found id: ""
	I0425 20:05:02.817854   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.817865   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:02.817898   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:02.817963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:02.857036   72712 cri.go:89] found id: ""
	I0425 20:05:02.857066   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.857077   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:02.857085   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:02.857144   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:02.900112   72712 cri.go:89] found id: ""
	I0425 20:05:02.900145   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.900157   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:02.900164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:02.900221   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:02.941079   72712 cri.go:89] found id: ""
	I0425 20:05:02.941109   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.941116   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:02.941121   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:02.941198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:02.983458   72712 cri.go:89] found id: ""
	I0425 20:05:02.983490   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.983502   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:02.983510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:02.983574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:03.025424   72712 cri.go:89] found id: ""
	I0425 20:05:03.025451   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.025462   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:03.025469   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:03.025556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:03.065285   72712 cri.go:89] found id: ""
	I0425 20:05:03.065316   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.065328   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:03.065340   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:03.065351   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:03.121235   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:03.121267   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:03.138036   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:03.138073   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:03.213604   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:03.213638   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:03.213655   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:03.296696   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:03.296741   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.842642   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:05.859125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:05.859199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:05.906505   72712 cri.go:89] found id: ""
	I0425 20:05:05.906529   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.906537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:05.906542   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:05.906595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:05.950793   72712 cri.go:89] found id: ""
	I0425 20:05:05.950819   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.950831   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:05.950838   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:05.950902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:05.991612   72712 cri.go:89] found id: ""
	I0425 20:05:05.991644   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.991654   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:05.991661   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:05.991755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:06.032273   72712 cri.go:89] found id: ""
	I0425 20:05:06.032314   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.032326   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:06.032334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:06.032392   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:06.071802   72712 cri.go:89] found id: ""
	I0425 20:05:06.071833   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.071844   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:06.071852   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:06.071908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:06.116676   72712 cri.go:89] found id: ""
	I0425 20:05:06.116702   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.116710   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:06.116716   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:06.116759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:06.154720   72712 cri.go:89] found id: ""
	I0425 20:05:06.154753   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.154765   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:06.154771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:06.154842   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:06.196421   72712 cri.go:89] found id: ""
	I0425 20:05:06.196457   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.196469   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:06.196480   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:06.196493   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:06.251061   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:06.251122   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:06.267764   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:06.267799   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:06.345302   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:06.345334   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:06.345349   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:06.427836   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:06.427868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
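	The cycle above repeats while the control plane never comes up: process 72712 probes for a running kube-apiserver with pgrep, asks CRI-O for each control-plane container by name, finds none, and then collects kubelet, dmesg, CRI-O and container-status logs, with every "describe nodes" attempt refused on localhost:8443. A minimal sketch of the same probe run by hand over minikube ssh (the profile name is a placeholder, not taken from this log):

	    # probe for an apiserver process, then ask CRI-O for control-plane containers by name
	    minikube ssh -p <profile> -- 'sudo pgrep -xnf "kube-apiserver.*minikube.*"'
	    minikube ssh -p <profile> -- 'sudo crictl ps -a --quiet --name=kube-apiserver'
	    minikube ssh -p <profile> -- 'sudo crictl ps -a --quiet --name=etcd'

	An empty ID list from crictl, as seen in every iteration here, means no such container exists at all (crictl ps -a includes exited containers).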
	I0425 20:05:05.873928   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.372014   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.422386   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.916659   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.384700   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.883611   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:10.885814   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.989442   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:09.004493   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:09.004551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:09.056062   72712 cri.go:89] found id: ""
	I0425 20:05:09.056086   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.056096   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:09.056101   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:09.056148   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:09.096791   72712 cri.go:89] found id: ""
	I0425 20:05:09.096817   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.096827   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:09.096834   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:09.096889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:09.134649   72712 cri.go:89] found id: ""
	I0425 20:05:09.134680   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.134691   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:09.134698   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:09.134757   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:09.175980   72712 cri.go:89] found id: ""
	I0425 20:05:09.176010   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.176021   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:09.176028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:09.176084   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:09.216263   72712 cri.go:89] found id: ""
	I0425 20:05:09.216299   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.216313   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:09.216325   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:09.216395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:09.260498   72712 cri.go:89] found id: ""
	I0425 20:05:09.260528   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.260538   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:09.260544   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:09.260603   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:09.303154   72712 cri.go:89] found id: ""
	I0425 20:05:09.303178   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.303201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:09.303209   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:09.303269   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:09.350798   72712 cri.go:89] found id: ""
	I0425 20:05:09.350829   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.350840   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:09.350852   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:09.350868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:09.405295   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:09.405332   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:09.422788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:09.422820   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:09.501819   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:09.501841   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:09.501855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:09.586938   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:09.586981   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:12.132731   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:12.148860   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:12.148935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:12.194021   72712 cri.go:89] found id: ""
	I0425 20:05:12.194051   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.194064   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:12.194072   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:12.194152   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:12.234680   72712 cri.go:89] found id: ""
	I0425 20:05:12.234710   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.234721   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:12.234728   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:12.234792   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:12.277751   72712 cri.go:89] found id: ""
	I0425 20:05:12.277783   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.277794   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:12.277802   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:12.277864   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:12.324068   72712 cri.go:89] found id: ""
	I0425 20:05:12.324100   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.324117   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:12.324125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:12.324187   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:10.374594   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.873217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:11.424208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.425980   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.387259   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.884337   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.366797   72712 cri.go:89] found id: ""
	I0425 20:05:12.366825   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.366837   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:12.366844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:12.366903   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:12.413092   72712 cri.go:89] found id: ""
	I0425 20:05:12.413120   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.413132   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:12.413139   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:12.413198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:12.461229   72712 cri.go:89] found id: ""
	I0425 20:05:12.461253   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.461262   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:12.461268   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:12.461333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:12.504646   72712 cri.go:89] found id: ""
	I0425 20:05:12.504669   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.504677   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:12.504685   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:12.504698   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:12.561630   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:12.561673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:12.578043   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:12.578069   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:12.655176   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:12.655195   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:12.655209   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:12.736323   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:12.736357   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.287503   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:15.302830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:15.302893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:15.339479   72712 cri.go:89] found id: ""
	I0425 20:05:15.339509   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.339519   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:15.339527   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:15.339589   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:15.381431   72712 cri.go:89] found id: ""
	I0425 20:05:15.381458   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.381467   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:15.381475   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:15.381537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:15.423729   72712 cri.go:89] found id: ""
	I0425 20:05:15.423755   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.423767   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:15.423774   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:15.423833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:15.464367   72712 cri.go:89] found id: ""
	I0425 20:05:15.464401   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.464413   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:15.464421   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:15.464489   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:15.508306   72712 cri.go:89] found id: ""
	I0425 20:05:15.508336   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.508348   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:15.508356   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:15.508419   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:15.548572   72712 cri.go:89] found id: ""
	I0425 20:05:15.548600   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.548610   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:15.548616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:15.548678   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:15.592885   72712 cri.go:89] found id: ""
	I0425 20:05:15.592914   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.592926   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:15.592933   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:15.592992   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:15.632817   72712 cri.go:89] found id: ""
	I0425 20:05:15.632855   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.632868   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:15.632880   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:15.632900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:15.648443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:15.648470   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:15.726167   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:15.726191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:15.726229   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:15.803028   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:15.803066   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.850519   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:15.850552   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:14.873291   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:17.372118   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.917932   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.420096   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.384555   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.885930   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
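	Interleaved with the 72712 loop, three other profiles (processes 72220, 71966 and 72304) keep polling their metrics-server pods, which stay not-Ready for the whole window. A rough manual equivalent of the Ready check that pod_ready.go is waiting on (the context name and the k8s-app=metrics-server label are assumptions, not taken from this log):

	    # print each metrics-server pod and its Ready condition
	    kubectl --context <profile> -n kube-system get pod -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'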
	I0425 20:05:18.404671   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:18.422600   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:18.422663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:18.476977   72712 cri.go:89] found id: ""
	I0425 20:05:18.477001   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.477009   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:18.477021   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:18.477093   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:18.525595   72712 cri.go:89] found id: ""
	I0425 20:05:18.525631   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.525641   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:18.525648   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:18.525714   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:18.565485   72712 cri.go:89] found id: ""
	I0425 20:05:18.565513   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.565523   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:18.565531   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:18.565600   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:18.612059   72712 cri.go:89] found id: ""
	I0425 20:05:18.612096   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.612106   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:18.612112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:18.612173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:18.659407   72712 cri.go:89] found id: ""
	I0425 20:05:18.659438   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.659449   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:18.659456   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:18.659507   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:18.701065   72712 cri.go:89] found id: ""
	I0425 20:05:18.701092   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.701101   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:18.701106   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:18.701201   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:18.738234   72712 cri.go:89] found id: ""
	I0425 20:05:18.738264   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.738276   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:18.738284   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:18.738343   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:18.780460   72712 cri.go:89] found id: ""
	I0425 20:05:18.780489   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.780498   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:18.780514   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:18.780526   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:18.834345   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:18.834378   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:18.850006   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:18.850033   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:18.932146   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:18.932171   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:18.932185   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:19.015036   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:19.015068   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:21.568250   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:21.582519   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:21.582595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:21.622886   72712 cri.go:89] found id: ""
	I0425 20:05:21.622913   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.622920   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:21.622925   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:21.622974   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:21.664832   72712 cri.go:89] found id: ""
	I0425 20:05:21.664860   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.664874   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:21.664882   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:21.664950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:21.703801   72712 cri.go:89] found id: ""
	I0425 20:05:21.703829   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.703843   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:21.703850   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:21.703911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:21.741502   72712 cri.go:89] found id: ""
	I0425 20:05:21.741540   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.741549   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:21.741555   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:21.741612   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:21.783715   72712 cri.go:89] found id: ""
	I0425 20:05:21.783745   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.783754   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:21.783759   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:21.783803   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:21.822806   72712 cri.go:89] found id: ""
	I0425 20:05:21.822842   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.822851   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:21.822856   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:21.822915   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:21.864996   72712 cri.go:89] found id: ""
	I0425 20:05:21.865020   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.865030   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:21.865037   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:21.865092   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:21.907533   72712 cri.go:89] found id: ""
	I0425 20:05:21.907563   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.907575   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:21.907585   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:21.907601   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:21.964226   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:21.964260   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:21.980096   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:21.980123   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:22.059516   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:22.059539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:22.059566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:22.136752   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:22.136784   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:19.373290   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:21.873377   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.916720   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:22.917156   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.918191   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:23.384566   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:25.885793   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.682139   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:24.697495   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:24.697564   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:24.739725   72712 cri.go:89] found id: ""
	I0425 20:05:24.739750   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.739760   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:24.739766   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:24.739824   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:24.777455   72712 cri.go:89] found id: ""
	I0425 20:05:24.777485   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.777497   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:24.777504   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:24.777566   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:24.821729   72712 cri.go:89] found id: ""
	I0425 20:05:24.821761   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.821774   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:24.821782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:24.821845   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:24.861745   72712 cri.go:89] found id: ""
	I0425 20:05:24.861773   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.861784   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:24.861791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:24.861851   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:24.903441   72712 cri.go:89] found id: ""
	I0425 20:05:24.903470   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.903479   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:24.903486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:24.903544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:24.943589   72712 cri.go:89] found id: ""
	I0425 20:05:24.943618   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.943629   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:24.943637   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:24.943717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:24.983629   72712 cri.go:89] found id: ""
	I0425 20:05:24.983661   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.983672   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:24.983680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:24.983739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:25.022413   72712 cri.go:89] found id: ""
	I0425 20:05:25.022441   72712 logs.go:276] 0 containers: []
	W0425 20:05:25.022451   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:25.022462   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:25.022477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:25.077402   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:25.077438   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:25.094488   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:25.094517   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:25.171485   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:25.171515   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:25.171535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:25.251131   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:25.251166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:24.373762   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:26.873969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.420395   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:29.420994   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:28.384247   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:30.883795   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.797359   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:27.813601   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:27.813659   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:27.854017   72712 cri.go:89] found id: ""
	I0425 20:05:27.854051   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.854061   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:27.854066   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:27.854117   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:27.900425   72712 cri.go:89] found id: ""
	I0425 20:05:27.900451   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.900461   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:27.900468   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:27.900531   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:27.940064   72712 cri.go:89] found id: ""
	I0425 20:05:27.940096   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.940107   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:27.940114   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:27.940174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:27.979363   72712 cri.go:89] found id: ""
	I0425 20:05:27.979385   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.979393   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:27.979399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:27.979442   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:28.019702   72712 cri.go:89] found id: ""
	I0425 20:05:28.019723   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.019731   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:28.019736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:28.019798   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:28.058711   72712 cri.go:89] found id: ""
	I0425 20:05:28.058740   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.058748   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:28.058755   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:28.058810   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:28.104465   72712 cri.go:89] found id: ""
	I0425 20:05:28.104495   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.104507   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:28.104515   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:28.104577   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:28.142399   72712 cri.go:89] found id: ""
	I0425 20:05:28.142431   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.142440   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:28.142449   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:28.142460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:28.222763   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:28.222786   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:28.222801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:28.299797   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:28.299838   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:28.366569   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:28.366594   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:28.424581   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:28.424628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:30.942526   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:30.957400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:30.957482   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:30.996931   72712 cri.go:89] found id: ""
	I0425 20:05:30.996958   72712 logs.go:276] 0 containers: []
	W0425 20:05:30.996967   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:30.996974   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:30.997029   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:31.035673   72712 cri.go:89] found id: ""
	I0425 20:05:31.035700   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.035712   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:31.035719   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:31.035782   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:31.075783   72712 cri.go:89] found id: ""
	I0425 20:05:31.075809   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.075820   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:31.075826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:31.075886   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:31.114229   72712 cri.go:89] found id: ""
	I0425 20:05:31.114257   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.114267   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:31.114274   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:31.114333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:31.155385   72712 cri.go:89] found id: ""
	I0425 20:05:31.155409   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.155419   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:31.155427   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:31.155486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:31.193772   72712 cri.go:89] found id: ""
	I0425 20:05:31.193804   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.193815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:31.193823   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:31.193878   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:31.233886   72712 cri.go:89] found id: ""
	I0425 20:05:31.233909   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.233917   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:31.233923   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:31.233967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:31.273427   72712 cri.go:89] found id: ""
	I0425 20:05:31.273455   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.273465   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:31.273476   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:31.273491   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:31.354429   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:31.354462   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:31.406018   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:31.406047   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:31.460972   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:31.461007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:31.477485   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:31.477513   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:31.551616   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
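	The refused connection on localhost:8443 is consistent with the crictl output above: no kube-apiserver container exists, so nothing is listening on the apiserver port. A quick way to confirm that from inside the node (sketch; ss and curl being available on the guest is assumed):

	    # verify nothing is bound to 8443 and that the endpoint actively refuses connections
	    sudo ss -ltnp | grep 8443 || echo "no listener on 8443"
	    curl -k --max-time 5 https://localhost:8443/healthz || true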
	I0425 20:05:29.371357   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.373007   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.421948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.424866   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.384577   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.884780   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:34.052808   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:34.068068   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:34.068158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:34.120984   72712 cri.go:89] found id: ""
	I0425 20:05:34.121016   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.121024   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:34.121032   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:34.121082   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:34.160646   72712 cri.go:89] found id: ""
	I0425 20:05:34.160676   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.160687   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:34.160694   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:34.160752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:34.202641   72712 cri.go:89] found id: ""
	I0425 20:05:34.202665   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.202671   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:34.202677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:34.202733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:34.244352   72712 cri.go:89] found id: ""
	I0425 20:05:34.244379   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.244391   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:34.244400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:34.244460   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:34.285858   72712 cri.go:89] found id: ""
	I0425 20:05:34.285885   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.285896   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:34.285904   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:34.285956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:34.323634   72712 cri.go:89] found id: ""
	I0425 20:05:34.323662   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.323673   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:34.323681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:34.323739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:34.365230   72712 cri.go:89] found id: ""
	I0425 20:05:34.365256   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.365272   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:34.365280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:34.365339   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:34.409329   72712 cri.go:89] found id: ""
	I0425 20:05:34.409354   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.409365   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:34.409376   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:34.409390   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:34.464575   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:34.464606   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:34.480244   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:34.480270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:34.560204   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:34.560224   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:34.560236   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:34.640152   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:34.640187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:37.189992   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:37.204683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:37.204786   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:37.245857   72712 cri.go:89] found id: ""
	I0425 20:05:37.245891   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.245903   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:37.245910   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:37.245969   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:37.284668   72712 cri.go:89] found id: ""
	I0425 20:05:37.284696   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.284704   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:37.284710   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:37.284762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:37.324349   72712 cri.go:89] found id: ""
	I0425 20:05:37.324379   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.324391   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:37.324399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:37.324461   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:33.872836   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.873214   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.373278   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.917308   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.419746   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.383933   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.385166   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:37.361764   72712 cri.go:89] found id: ""
	I0425 20:05:37.361787   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.361800   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:37.361811   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:37.361857   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:37.404331   72712 cri.go:89] found id: ""
	I0425 20:05:37.404353   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.404360   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:37.404366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:37.404430   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:37.445284   72712 cri.go:89] found id: ""
	I0425 20:05:37.445316   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.445327   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:37.445334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:37.445395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:37.483806   72712 cri.go:89] found id: ""
	I0425 20:05:37.483828   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.483837   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:37.483843   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:37.483888   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:37.524649   72712 cri.go:89] found id: ""
	I0425 20:05:37.524673   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.524680   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:37.524689   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:37.524701   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:37.581521   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:37.581553   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:37.598459   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:37.598487   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:37.671236   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:37.671256   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:37.671272   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:37.750517   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:37.750556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.293743   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:40.310344   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:40.310426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:40.356157   72712 cri.go:89] found id: ""
	I0425 20:05:40.356198   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.356208   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:40.356215   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:40.356277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:40.397857   72712 cri.go:89] found id: ""
	I0425 20:05:40.397886   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.397895   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:40.397902   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:40.397964   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:40.445034   72712 cri.go:89] found id: ""
	I0425 20:05:40.445057   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.445065   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:40.445071   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:40.445126   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:40.493744   72712 cri.go:89] found id: ""
	I0425 20:05:40.493773   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.493783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:40.493797   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:40.493856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:40.550546   72712 cri.go:89] found id: ""
	I0425 20:05:40.550572   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.550580   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:40.550587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:40.550654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:40.605122   72712 cri.go:89] found id: ""
	I0425 20:05:40.605153   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.605164   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:40.605172   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:40.605232   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:40.675713   72712 cri.go:89] found id: ""
	I0425 20:05:40.675745   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.675755   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:40.675769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:40.675828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:40.716064   72712 cri.go:89] found id: ""
	I0425 20:05:40.716093   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.716101   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:40.716109   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:40.716120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:40.781395   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:40.781441   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:40.797597   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:40.797628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:40.880931   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:40.880956   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:40.880971   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:40.970770   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:40.970800   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.373398   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.873163   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.918560   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.417610   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:45.420963   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.883556   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:44.883719   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.520389   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:43.537668   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:43.537729   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:43.578137   72712 cri.go:89] found id: ""
	I0425 20:05:43.578166   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.578175   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:43.578180   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:43.578247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:43.617428   72712 cri.go:89] found id: ""
	I0425 20:05:43.617454   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.617462   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:43.617466   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:43.617519   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:43.655401   72712 cri.go:89] found id: ""
	I0425 20:05:43.655431   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.655443   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:43.655450   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:43.655514   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:43.695183   72712 cri.go:89] found id: ""
	I0425 20:05:43.695212   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.695230   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:43.695238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:43.695316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:43.735056   72712 cri.go:89] found id: ""
	I0425 20:05:43.735086   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.735098   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:43.735104   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:43.735162   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:43.774761   72712 cri.go:89] found id: ""
	I0425 20:05:43.774789   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.774799   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:43.774830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:43.774889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:43.819102   72712 cri.go:89] found id: ""
	I0425 20:05:43.819128   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.819138   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:43.819146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:43.819206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:43.858235   72712 cri.go:89] found id: ""
	I0425 20:05:43.858267   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.858278   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:43.858289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:43.858303   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:43.940756   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:43.940794   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:43.985878   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:43.985925   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:44.040177   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:44.040207   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:44.055912   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:44.055942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:44.143724   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:46.643923   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:46.658863   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:46.658941   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:46.697826   72712 cri.go:89] found id: ""
	I0425 20:05:46.697850   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.697858   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:46.697884   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:46.697947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:46.739850   72712 cri.go:89] found id: ""
	I0425 20:05:46.739877   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.739888   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:46.739897   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:46.739955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:46.781212   72712 cri.go:89] found id: ""
	I0425 20:05:46.781241   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.781256   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:46.781262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:46.781321   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:46.826005   72712 cri.go:89] found id: ""
	I0425 20:05:46.826036   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.826047   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:46.826055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:46.826109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:46.865428   72712 cri.go:89] found id: ""
	I0425 20:05:46.865456   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.865465   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:46.865472   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:46.865522   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:46.914860   72712 cri.go:89] found id: ""
	I0425 20:05:46.914887   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.914897   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:46.914907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:46.914968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:46.955323   72712 cri.go:89] found id: ""
	I0425 20:05:46.955355   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.955365   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:46.955373   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:46.955436   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:46.999369   72712 cri.go:89] found id: ""
	I0425 20:05:46.999396   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.999408   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:46.999419   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:46.999464   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:47.013865   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:47.013893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:47.094725   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:47.094755   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:47.094771   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:47.178380   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:47.178426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:47.227217   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:47.227249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:45.375272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.872640   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.917579   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.918001   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:46.884746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:48.884818   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.780217   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:49.795690   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:49.795760   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:49.834909   72712 cri.go:89] found id: ""
	I0425 20:05:49.834935   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.834943   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:49.834951   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:49.835004   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:49.872717   72712 cri.go:89] found id: ""
	I0425 20:05:49.872747   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.872755   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:49.872762   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:49.872807   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:49.919348   72712 cri.go:89] found id: ""
	I0425 20:05:49.919376   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.919387   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:49.919395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:49.919465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:49.959673   72712 cri.go:89] found id: ""
	I0425 20:05:49.959705   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.959716   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:49.959728   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:49.959796   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:49.999276   72712 cri.go:89] found id: ""
	I0425 20:05:49.999299   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.999306   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:49.999312   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:49.999361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:50.037426   72712 cri.go:89] found id: ""
	I0425 20:05:50.037454   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.037461   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:50.037466   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:50.037510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:50.080666   72712 cri.go:89] found id: ""
	I0425 20:05:50.080695   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.080703   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:50.080719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:50.080776   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:50.126065   72712 cri.go:89] found id: ""
	I0425 20:05:50.126111   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.126123   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:50.126134   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:50.126148   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:50.140778   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:50.140805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:50.213282   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:50.213308   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:50.213320   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:50.293798   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:50.293832   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:50.336823   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:50.336859   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:49.873685   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.372830   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.919781   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:54.417518   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.382698   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:53.392894   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:55.884231   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.892579   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:52.909556   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:52.909629   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:52.948098   72712 cri.go:89] found id: ""
	I0425 20:05:52.948127   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.948138   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:52.948146   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:52.948206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:52.988813   72712 cri.go:89] found id: ""
	I0425 20:05:52.988840   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.988848   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:52.988853   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:52.988898   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:53.032181   72712 cri.go:89] found id: ""
	I0425 20:05:53.032211   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.032222   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:53.032230   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:53.032288   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:53.075496   72712 cri.go:89] found id: ""
	I0425 20:05:53.075528   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.075538   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:53.075543   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:53.075599   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:53.119037   72712 cri.go:89] found id: ""
	I0425 20:05:53.119070   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.119082   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:53.119095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:53.119158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:53.158276   72712 cri.go:89] found id: ""
	I0425 20:05:53.158303   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.158314   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:53.158321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:53.158381   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:53.196168   72712 cri.go:89] found id: ""
	I0425 20:05:53.196199   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.196211   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:53.196219   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:53.196277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:53.235212   72712 cri.go:89] found id: ""
	I0425 20:05:53.235235   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.235243   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:53.235250   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:53.235261   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:53.290435   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:53.290474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:53.306351   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:53.306380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:53.388623   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:53.388652   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:53.388666   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:53.480388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:53.480426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:56.027403   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:56.042683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:56.042755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:56.083672   72712 cri.go:89] found id: ""
	I0425 20:05:56.083706   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.083718   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:56.083725   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:56.083790   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:56.124071   72712 cri.go:89] found id: ""
	I0425 20:05:56.124105   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.124126   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:56.124134   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:56.124200   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:56.166692   72712 cri.go:89] found id: ""
	I0425 20:05:56.166724   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.166737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:56.166744   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:56.166808   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:56.203833   72712 cri.go:89] found id: ""
	I0425 20:05:56.203871   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.203884   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:56.203892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:56.203950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:56.242277   72712 cri.go:89] found id: ""
	I0425 20:05:56.242319   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.242341   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:56.242349   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:56.242416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:56.281697   72712 cri.go:89] found id: ""
	I0425 20:05:56.281726   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.281733   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:56.281739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:56.281812   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:56.322190   72712 cri.go:89] found id: ""
	I0425 20:05:56.322233   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.322243   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:56.322248   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:56.322310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:56.364831   72712 cri.go:89] found id: ""
	I0425 20:05:56.364853   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.364864   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:56.364875   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:56.364889   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:56.422824   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:56.422856   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:56.437619   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:56.437641   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:56.512938   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:56.512961   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:56.512977   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:56.598670   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:56.598708   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:54.872566   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.873184   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.917352   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.421645   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:58.383740   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:00.384113   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.150322   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:59.166883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:59.166956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:59.205086   72712 cri.go:89] found id: ""
	I0425 20:05:59.205112   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.205121   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:59.205126   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:59.205199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:59.253430   72712 cri.go:89] found id: ""
	I0425 20:05:59.253458   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.253469   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:59.253478   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:59.253539   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:59.293691   72712 cri.go:89] found id: ""
	I0425 20:05:59.293719   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.293731   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:59.293738   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:59.293801   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:59.331580   72712 cri.go:89] found id: ""
	I0425 20:05:59.331604   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.331613   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:59.331619   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:59.331663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:59.369985   72712 cri.go:89] found id: ""
	I0425 20:05:59.370012   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.370023   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:59.370031   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:59.370095   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:59.411636   72712 cri.go:89] found id: ""
	I0425 20:05:59.411662   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.411670   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:59.411676   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:59.411733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:59.454735   72712 cri.go:89] found id: ""
	I0425 20:05:59.454762   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.454774   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:59.454782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:59.454839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:59.497664   72712 cri.go:89] found id: ""
	I0425 20:05:59.497694   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.497704   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:59.497715   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:59.497731   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:59.556694   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:59.556728   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:59.572160   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:59.572187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:59.649040   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:59.649063   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:59.649083   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:59.727941   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:59.727975   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.275513   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:02.290486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:02.290557   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:02.332217   72712 cri.go:89] found id: ""
	I0425 20:06:02.332255   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.332273   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:02.332281   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:02.332357   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:58.873314   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.373601   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.916947   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.418479   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.384744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.885488   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.373346   72712 cri.go:89] found id: ""
	I0425 20:06:02.373370   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.373377   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:02.373382   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:02.373439   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:02.415835   72712 cri.go:89] found id: ""
	I0425 20:06:02.415861   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.415873   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:02.415881   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:02.415939   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:02.458876   72712 cri.go:89] found id: ""
	I0425 20:06:02.458905   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.458917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:02.458926   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:02.459008   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:02.502092   72712 cri.go:89] found id: ""
	I0425 20:06:02.502127   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.502138   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:02.502146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:02.502235   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:02.546357   72712 cri.go:89] found id: ""
	I0425 20:06:02.546383   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.546393   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:02.546399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:02.546459   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:02.586842   72712 cri.go:89] found id: ""
	I0425 20:06:02.586870   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.586881   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:02.586887   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:02.586932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:02.629305   72712 cri.go:89] found id: ""
	I0425 20:06:02.629339   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.629350   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:02.629360   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:02.629374   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.676583   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:02.676626   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:02.731790   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:02.731825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:02.747473   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:02.747499   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:02.824265   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:02.824289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:02.824304   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.408968   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:05.423645   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:05.423713   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:05.467402   72712 cri.go:89] found id: ""
	I0425 20:06:05.467425   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.467434   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:05.467445   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:05.467510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:05.503131   72712 cri.go:89] found id: ""
	I0425 20:06:05.503153   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.503161   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:05.503166   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:05.503216   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:05.545694   72712 cri.go:89] found id: ""
	I0425 20:06:05.545721   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.545732   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:05.545739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:05.545804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:05.585879   72712 cri.go:89] found id: ""
	I0425 20:06:05.585905   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.585912   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:05.585917   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:05.585963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:05.625520   72712 cri.go:89] found id: ""
	I0425 20:06:05.625549   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.625560   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:05.625567   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:05.625620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:05.664306   72712 cri.go:89] found id: ""
	I0425 20:06:05.664335   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.664345   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:05.664364   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:05.664437   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:05.705353   72712 cri.go:89] found id: ""
	I0425 20:06:05.705385   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.705397   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:05.705405   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:05.705468   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:05.743935   72712 cri.go:89] found id: ""
	I0425 20:06:05.743968   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.743977   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:05.743986   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:05.743997   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:05.801190   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:05.801234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:05.817046   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:05.817074   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:05.899413   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:05.899443   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:05.899458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.986303   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:05.986336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:03.872605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:05.876833   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.373392   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.916334   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.917480   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.887784   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:09.387085   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.531748   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:08.550667   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:08.550749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:08.594062   72712 cri.go:89] found id: ""
	I0425 20:06:08.594093   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.594102   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:08.594108   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:08.594163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:08.635823   72712 cri.go:89] found id: ""
	I0425 20:06:08.635861   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.635872   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:08.635880   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:08.635944   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:08.675338   72712 cri.go:89] found id: ""
	I0425 20:06:08.675383   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.675395   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:08.675402   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:08.675463   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:08.715971   72712 cri.go:89] found id: ""
	I0425 20:06:08.716001   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.716012   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:08.716019   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:08.716088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:08.758565   72712 cri.go:89] found id: ""
	I0425 20:06:08.758597   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.758608   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:08.758616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:08.758683   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:08.800179   72712 cri.go:89] found id: ""
	I0425 20:06:08.800207   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.800218   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:08.800226   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:08.800286   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:08.854603   72712 cri.go:89] found id: ""
	I0425 20:06:08.854639   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.854651   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:08.854659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:08.854724   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:08.904115   72712 cri.go:89] found id: ""
	I0425 20:06:08.904141   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.904152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:08.904162   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:08.904177   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:08.921826   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:08.921855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:09.003667   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:09.003687   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:09.003699   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:09.086301   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:09.086346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:09.138478   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:09.138516   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
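(Each retry above runs the same diagnostic pass: pgrep for kube-apiserver, one crictl query per expected control-plane container — all empty while the v1.20.0 control plane is down — and then log collection from kubelet, dmesg, CRI-O and container status. A sketch of that container probe, runnable by hand inside the node assuming crictl is available there, e.g. via minikube ssh:

    #!/bin/bash
    # Probe for the same containers the log queries above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      if [ -z "${ids}" ]; then
        echo "no container found matching \"${name}\""
      else
        echo "${name}: ${ids}"
      fi
    done
)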
	I0425 20:06:11.704402   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:11.721810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:11.721902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:11.768790   72712 cri.go:89] found id: ""
	I0425 20:06:11.768829   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.768850   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:11.768858   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:11.768928   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:11.813543   72712 cri.go:89] found id: ""
	I0425 20:06:11.813576   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.813588   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:11.813595   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:11.813654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:11.853930   72712 cri.go:89] found id: ""
	I0425 20:06:11.853962   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.853972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:11.853980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:11.854044   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:11.900808   72712 cri.go:89] found id: ""
	I0425 20:06:11.900843   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.900853   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:11.900861   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:11.900919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:11.948850   72712 cri.go:89] found id: ""
	I0425 20:06:11.948876   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.948885   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:11.948890   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:11.948945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:11.989326   72712 cri.go:89] found id: ""
	I0425 20:06:11.989356   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.989365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:11.989371   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:11.989450   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:12.033912   72712 cri.go:89] found id: ""
	I0425 20:06:12.033943   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.033954   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:12.033959   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:12.034015   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:12.076170   72712 cri.go:89] found id: ""
	I0425 20:06:12.076199   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.076209   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:12.076217   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:12.076230   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:12.124851   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:12.124881   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:12.178927   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:12.178964   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:12.194925   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:12.194952   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:12.272163   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:12.272187   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:12.272202   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:10.374908   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.871613   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:10.917911   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.918144   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:15.419043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:11.886066   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.383880   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.851400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:14.869893   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:14.869967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:14.915793   72712 cri.go:89] found id: ""
	I0425 20:06:14.915820   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.915829   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:14.915836   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:14.915896   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:14.959549   72712 cri.go:89] found id: ""
	I0425 20:06:14.959576   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.959587   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:14.959606   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:14.959672   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:15.001420   72712 cri.go:89] found id: ""
	I0425 20:06:15.001453   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.001465   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:15.001474   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:15.001552   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:15.047960   72712 cri.go:89] found id: ""
	I0425 20:06:15.047988   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.047996   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:15.048001   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:15.048049   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:15.096688   72712 cri.go:89] found id: ""
	I0425 20:06:15.096722   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.096730   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:15.096736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:15.096795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:15.142673   72712 cri.go:89] found id: ""
	I0425 20:06:15.142701   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.142712   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:15.142719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:15.142784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:15.181729   72712 cri.go:89] found id: ""
	I0425 20:06:15.181757   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.181766   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:15.181773   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:15.181820   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:15.227858   72712 cri.go:89] found id: ""
	I0425 20:06:15.227886   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.227897   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:15.227905   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:15.227917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:15.283253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:15.283293   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:15.305572   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:15.305604   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:15.439587   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:15.439615   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:15.439631   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:15.525678   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:15.525714   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:14.872914   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.873605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:17.420065   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:19.917501   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.383915   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:18.883746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:20.884190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:18.078788   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:18.095012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:18.095083   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:18.136753   72712 cri.go:89] found id: ""
	I0425 20:06:18.136784   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.136796   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:18.136802   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:18.136850   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:18.184584   72712 cri.go:89] found id: ""
	I0425 20:06:18.184606   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.184614   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:18.184619   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:18.184691   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:18.228201   72712 cri.go:89] found id: ""
	I0425 20:06:18.228250   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.228263   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:18.228270   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:18.228326   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:18.267756   72712 cri.go:89] found id: ""
	I0425 20:06:18.267778   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.267786   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:18.267792   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:18.267855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:18.309727   72712 cri.go:89] found id: ""
	I0425 20:06:18.309755   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.309763   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:18.309769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:18.309827   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:18.350549   72712 cri.go:89] found id: ""
	I0425 20:06:18.350580   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.350592   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:18.350599   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:18.350656   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:18.393868   72712 cri.go:89] found id: ""
	I0425 20:06:18.393891   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.393902   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:18.393910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:18.393989   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:18.435163   72712 cri.go:89] found id: ""
	I0425 20:06:18.435195   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.435204   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:18.435211   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:18.435224   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:18.450871   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:18.450901   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:18.534501   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:18.534526   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:18.534538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:18.616979   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:18.617015   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:18.663568   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:18.663598   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.217744   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:21.235862   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:21.235955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:21.288966   72712 cri.go:89] found id: ""
	I0425 20:06:21.288996   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.289005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:21.289014   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:21.289075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:21.362068   72712 cri.go:89] found id: ""
	I0425 20:06:21.362092   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.362101   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:21.362108   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:21.362168   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:21.416870   72712 cri.go:89] found id: ""
	I0425 20:06:21.416894   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.416901   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:21.416907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:21.416956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:21.461465   72712 cri.go:89] found id: ""
	I0425 20:06:21.461495   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.461503   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:21.461508   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:21.461570   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:21.499985   72712 cri.go:89] found id: ""
	I0425 20:06:21.500014   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.500025   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:21.500032   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:21.500081   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:21.543725   72712 cri.go:89] found id: ""
	I0425 20:06:21.543764   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.543776   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:21.543784   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:21.543841   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:21.586535   72712 cri.go:89] found id: ""
	I0425 20:06:21.586566   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.586578   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:21.586587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:21.586644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:21.627885   72712 cri.go:89] found id: ""
	I0425 20:06:21.627912   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.627921   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:21.627929   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:21.627942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.685973   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:21.686006   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:21.702529   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:21.702556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:21.781634   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:21.781660   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:21.781673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:21.862986   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:21.863027   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:19.372142   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.374479   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.918699   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.419088   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:23.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:25.883438   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.413547   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:24.428247   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:24.428323   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:24.468708   72712 cri.go:89] found id: ""
	I0425 20:06:24.468757   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.468768   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:24.468775   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:24.468836   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:24.507667   72712 cri.go:89] found id: ""
	I0425 20:06:24.507694   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.507702   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:24.507708   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:24.507769   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:24.548537   72712 cri.go:89] found id: ""
	I0425 20:06:24.548562   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.548570   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:24.548576   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:24.548625   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:24.591240   72712 cri.go:89] found id: ""
	I0425 20:06:24.591264   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.591272   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:24.591280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:24.591325   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:24.631530   72712 cri.go:89] found id: ""
	I0425 20:06:24.631557   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.631568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:24.631575   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:24.631642   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:24.672878   72712 cri.go:89] found id: ""
	I0425 20:06:24.672903   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.672911   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:24.672916   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:24.672960   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:24.716168   72712 cri.go:89] found id: ""
	I0425 20:06:24.716193   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.716201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:24.716206   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:24.716256   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:24.758061   72712 cri.go:89] found id: ""
	I0425 20:06:24.758098   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.758110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:24.758122   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:24.758135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:24.839866   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:24.839900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:24.889288   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:24.889380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:24.946445   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:24.946488   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:24.963093   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:24.963126   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:25.044921   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:23.874297   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.372055   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.375436   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.916503   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.916669   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.887709   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.384645   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.545838   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:27.562659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:27.562717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:27.606462   72712 cri.go:89] found id: ""
	I0425 20:06:27.606491   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.606501   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:27.606509   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:27.606567   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:27.650475   72712 cri.go:89] found id: ""
	I0425 20:06:27.650505   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.650517   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:27.650524   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:27.650583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:27.695163   72712 cri.go:89] found id: ""
	I0425 20:06:27.695190   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.695201   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:27.695208   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:27.695265   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:27.741798   72712 cri.go:89] found id: ""
	I0425 20:06:27.741832   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.741842   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:27.741849   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:27.741904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:27.784146   72712 cri.go:89] found id: ""
	I0425 20:06:27.784175   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.784185   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:27.784193   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:27.784253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:27.827179   72712 cri.go:89] found id: ""
	I0425 20:06:27.827213   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.827225   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:27.827234   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:27.827298   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:27.872941   72712 cri.go:89] found id: ""
	I0425 20:06:27.872961   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.872980   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:27.872985   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:27.873040   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:27.917920   72712 cri.go:89] found id: ""
	I0425 20:06:27.917949   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.917959   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:27.917970   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:27.917985   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:27.971411   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:27.971455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:27.988704   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:27.988743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:28.064208   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:28.064229   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:28.064242   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:28.147388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:28.147427   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
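(Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused" for the same reason the crictl probes come back empty: no kube-apiserver container is running, so nothing listens on the API port. Two quick checks that confirm this from inside the node — not part of the original log; the port simply follows the localhost:8443 shown in the errors:

    # Empty output here means the apiserver container does not exist at all:
    sudo crictl ps -a --quiet --name=kube-apiserver
    # And the API port is closed, which kubectl reports as "connection refused":
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on localhost:8443"
)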
	I0425 20:06:30.694349   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:30.708595   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:30.708671   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:30.752963   72712 cri.go:89] found id: ""
	I0425 20:06:30.752994   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.753005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:30.753012   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:30.753073   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:30.795453   72712 cri.go:89] found id: ""
	I0425 20:06:30.795488   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.795498   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:30.795507   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:30.795574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:30.838945   72712 cri.go:89] found id: ""
	I0425 20:06:30.838970   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.838978   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:30.838984   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:30.839042   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:30.886128   72712 cri.go:89] found id: ""
	I0425 20:06:30.886160   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.886170   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:30.886178   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:30.886255   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:30.927773   72712 cri.go:89] found id: ""
	I0425 20:06:30.927805   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.927819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:30.927827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:30.927893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:30.968628   72712 cri.go:89] found id: ""
	I0425 20:06:30.968660   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.968672   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:30.968680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:30.968743   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:31.014590   72712 cri.go:89] found id: ""
	I0425 20:06:31.014616   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.014627   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:31.014634   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:31.014697   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:31.053236   72712 cri.go:89] found id: ""
	I0425 20:06:31.053262   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.053274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:31.053285   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:31.053301   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:31.107797   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:31.107834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:31.123675   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:31.123702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:31.201180   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:31.201204   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:31.201215   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:31.289474   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:31.289512   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.873981   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.373083   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.918572   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.420043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:35.421384   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:32.883164   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:34.883697   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.840828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:33.857736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:33.857795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:33.898621   72712 cri.go:89] found id: ""
	I0425 20:06:33.898647   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.898658   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:33.898665   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:33.898727   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:33.939211   72712 cri.go:89] found id: ""
	I0425 20:06:33.939234   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.939245   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:33.939250   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:33.939305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:33.981872   72712 cri.go:89] found id: ""
	I0425 20:06:33.981896   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.981903   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:33.981909   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:33.981965   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:34.027570   72712 cri.go:89] found id: ""
	I0425 20:06:34.027597   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.027609   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:34.027617   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:34.027675   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:34.072544   72712 cri.go:89] found id: ""
	I0425 20:06:34.072570   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.072586   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:34.072594   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:34.072674   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:34.119326   72712 cri.go:89] found id: ""
	I0425 20:06:34.119349   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.119358   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:34.119366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:34.119423   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:34.169618   72712 cri.go:89] found id: ""
	I0425 20:06:34.169642   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.169650   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:34.169655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:34.169705   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:34.213570   72712 cri.go:89] found id: ""
	I0425 20:06:34.213593   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.213601   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:34.213609   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:34.213621   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:34.255722   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:34.255756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:34.311113   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:34.311147   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:34.326869   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:34.326897   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:34.399765   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:34.399788   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:34.399801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:36.986610   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:37.003090   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:37.003163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:37.045929   72712 cri.go:89] found id: ""
	I0425 20:06:37.045956   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.045964   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:37.045969   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:37.046022   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:37.086835   72712 cri.go:89] found id: ""
	I0425 20:06:37.086868   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.086879   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:37.086885   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:37.086937   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:37.127454   72712 cri.go:89] found id: ""
	I0425 20:06:37.127479   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.127488   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:37.127494   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:37.127551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:37.168878   72712 cri.go:89] found id: ""
	I0425 20:06:37.168904   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.168917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:37.168924   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:37.168986   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:37.208859   72712 cri.go:89] found id: ""
	I0425 20:06:37.208889   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.208901   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:37.208914   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:37.208970   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:37.250407   72712 cri.go:89] found id: ""
	I0425 20:06:37.250439   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.250452   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:37.250467   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:37.250536   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:37.291004   72712 cri.go:89] found id: ""
	I0425 20:06:37.291040   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.291054   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:37.291063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:37.291125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:37.335573   72712 cri.go:89] found id: ""
	I0425 20:06:37.335597   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.335608   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:37.335619   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:37.335635   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:35.873065   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.371805   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.426152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:39.916340   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:36.884518   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.884859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.392773   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:37.392810   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:37.408311   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:37.408343   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:37.491376   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:37.491402   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:37.491416   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:37.574559   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:37.574600   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.125241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:40.142254   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:40.142347   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:40.186859   72712 cri.go:89] found id: ""
	I0425 20:06:40.186893   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.186904   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:40.186911   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:40.186972   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:40.229247   72712 cri.go:89] found id: ""
	I0425 20:06:40.229275   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.229288   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:40.229295   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:40.229361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:40.268853   72712 cri.go:89] found id: ""
	I0425 20:06:40.268879   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.268890   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:40.268897   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:40.268959   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:40.307621   72712 cri.go:89] found id: ""
	I0425 20:06:40.307650   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.307669   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:40.307677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:40.307732   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:40.351448   72712 cri.go:89] found id: ""
	I0425 20:06:40.351472   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.351484   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:40.351492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:40.351548   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:40.396771   72712 cri.go:89] found id: ""
	I0425 20:06:40.396804   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.396815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:40.396824   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:40.396890   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:40.443605   72712 cri.go:89] found id: ""
	I0425 20:06:40.443634   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.443642   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:40.443647   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:40.443694   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:40.495496   72712 cri.go:89] found id: ""
	I0425 20:06:40.495525   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.495536   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:40.495548   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:40.495563   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.539428   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:40.539457   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:40.596259   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:40.596305   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:40.613140   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:40.613167   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:40.701768   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:40.701793   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:40.701805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:40.372225   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:42.373541   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.916879   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.917783   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.386292   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.885441   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.294502   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:43.310041   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:43.310113   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:43.351841   72712 cri.go:89] found id: ""
	I0425 20:06:43.351864   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.351872   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:43.351877   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:43.351924   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:43.395467   72712 cri.go:89] found id: ""
	I0425 20:06:43.395497   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.395509   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:43.395516   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:43.395576   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:43.437256   72712 cri.go:89] found id: ""
	I0425 20:06:43.437354   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.437375   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:43.437384   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:43.437465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:43.480744   72712 cri.go:89] found id: ""
	I0425 20:06:43.480772   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.480783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:43.480791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:43.480839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:43.519916   72712 cri.go:89] found id: ""
	I0425 20:06:43.519951   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.519961   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:43.519975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:43.520039   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:43.557861   72712 cri.go:89] found id: ""
	I0425 20:06:43.557890   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.557901   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:43.557910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:43.557968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:43.594423   72712 cri.go:89] found id: ""
	I0425 20:06:43.594449   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.594458   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:43.594464   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:43.594512   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:43.632227   72712 cri.go:89] found id: ""
	I0425 20:06:43.632253   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.632262   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:43.632270   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:43.632281   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:43.688307   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:43.688336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:43.703382   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:43.703407   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:43.782073   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:43.782093   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:43.782109   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:43.872811   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:43.872842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:46.420420   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:46.435110   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:46.435174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:46.474019   72712 cri.go:89] found id: ""
	I0425 20:06:46.474044   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.474054   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:46.474067   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:46.474125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:46.517053   72712 cri.go:89] found id: ""
	I0425 20:06:46.517078   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.517088   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:46.517096   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:46.517150   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:46.560934   72712 cri.go:89] found id: ""
	I0425 20:06:46.560963   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.560972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:46.560977   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:46.561030   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:46.605969   72712 cri.go:89] found id: ""
	I0425 20:06:46.605997   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.606007   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:46.606012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:46.606061   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:46.647025   72712 cri.go:89] found id: ""
	I0425 20:06:46.647049   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.647058   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:46.647063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:46.647118   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:46.686931   72712 cri.go:89] found id: ""
	I0425 20:06:46.686956   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.686966   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:46.686975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:46.687053   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:46.727183   72712 cri.go:89] found id: ""
	I0425 20:06:46.727207   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.727216   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:46.727224   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:46.727277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:46.768030   72712 cri.go:89] found id: ""
	I0425 20:06:46.768059   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.768073   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:46.768085   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:46.768105   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:46.823400   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:46.823439   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:46.838443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:46.838468   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:46.919509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:46.919527   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:46.919538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:46.996250   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:46.996284   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:44.873706   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.874042   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:45.918619   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.418507   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.384559   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.884184   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.885081   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:49.542696   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:49.557346   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:49.557444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:49.595195   72712 cri.go:89] found id: ""
	I0425 20:06:49.595220   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.595231   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:49.595238   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:49.595305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:49.641324   72712 cri.go:89] found id: ""
	I0425 20:06:49.641354   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.641365   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:49.641373   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:49.641426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:49.681510   72712 cri.go:89] found id: ""
	I0425 20:06:49.681540   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.681552   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:49.681559   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:49.681620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:49.721482   72712 cri.go:89] found id: ""
	I0425 20:06:49.721509   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.721518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:49.721525   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:49.721581   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:49.762682   72712 cri.go:89] found id: ""
	I0425 20:06:49.762710   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.762723   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:49.762731   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:49.762793   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:49.801892   72712 cri.go:89] found id: ""
	I0425 20:06:49.801920   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.801932   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:49.801943   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:49.802002   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:49.840347   72712 cri.go:89] found id: ""
	I0425 20:06:49.840376   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.840387   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:49.840395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:49.840458   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:49.898486   72712 cri.go:89] found id: ""
	I0425 20:06:49.898516   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.898527   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:49.898536   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:49.898547   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:49.952735   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:49.952775   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:49.967986   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:49.968018   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:50.048003   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:50.048024   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:50.048040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:50.126062   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:50.126098   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:49.373031   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:51.873671   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.917641   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.418642   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.421542   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.384273   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.384393   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:52.679721   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:52.695636   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:52.695700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:52.738329   72712 cri.go:89] found id: ""
	I0425 20:06:52.738359   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.738368   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:52.738374   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:52.738420   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:52.779388   72712 cri.go:89] found id: ""
	I0425 20:06:52.779418   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.779426   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:52.779433   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:52.779496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:52.821105   72712 cri.go:89] found id: ""
	I0425 20:06:52.821137   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.821149   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:52.821168   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:52.821231   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:52.861781   72712 cri.go:89] found id: ""
	I0425 20:06:52.861817   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.861825   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:52.861831   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:52.861885   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:52.904602   72712 cri.go:89] found id: ""
	I0425 20:06:52.904633   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.904644   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:52.904651   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:52.904712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:52.951137   72712 cri.go:89] found id: ""
	I0425 20:06:52.951174   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.951183   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:52.951188   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:52.951234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:52.994199   72712 cri.go:89] found id: ""
	I0425 20:06:52.994249   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.994257   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:52.994262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:52.994315   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:53.031997   72712 cri.go:89] found id: ""
	I0425 20:06:53.032020   72712 logs.go:276] 0 containers: []
	W0425 20:06:53.032027   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:53.032035   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:53.032046   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:53.111351   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:53.111383   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:53.162470   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:53.162504   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:53.217188   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:53.217223   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:53.233071   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:53.233100   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:53.308983   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:55.809162   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:55.825185   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:55.825259   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:55.865963   72712 cri.go:89] found id: ""
	I0425 20:06:55.865989   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.866001   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:55.866009   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:55.866060   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:55.920565   72712 cri.go:89] found id: ""
	I0425 20:06:55.920601   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.920612   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:55.920620   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:55.920677   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:55.962643   72712 cri.go:89] found id: ""
	I0425 20:06:55.962669   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.962677   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:55.962684   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:55.962738   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:56.000737   72712 cri.go:89] found id: ""
	I0425 20:06:56.000764   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.000773   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:56.000782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:56.000828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:56.042226   72712 cri.go:89] found id: ""
	I0425 20:06:56.042251   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.042259   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:56.042265   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:56.042316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:56.080765   72712 cri.go:89] found id: ""
	I0425 20:06:56.080788   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.080798   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:56.080810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:56.080869   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:56.119563   72712 cri.go:89] found id: ""
	I0425 20:06:56.119590   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.119602   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:56.119608   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:56.119667   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:56.160136   72712 cri.go:89] found id: ""
	I0425 20:06:56.160162   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.160170   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:56.160179   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:56.160193   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:56.213506   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:56.213539   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:56.232121   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:56.232150   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:56.336606   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:56.336629   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:56.336640   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:56.426867   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:56.426908   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:54.374441   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:56.374847   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.916077   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.916521   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.384779   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.884281   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:58.975395   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:58.991064   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:58.991125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:59.031157   72712 cri.go:89] found id: ""
	I0425 20:06:59.031179   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.031190   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:59.031197   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:59.031253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:59.071893   72712 cri.go:89] found id: ""
	I0425 20:06:59.071923   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.071931   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:59.071937   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:59.071998   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:59.114714   72712 cri.go:89] found id: ""
	I0425 20:06:59.114749   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.114760   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:59.114768   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:59.114840   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:59.159482   72712 cri.go:89] found id: ""
	I0425 20:06:59.159510   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.159518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:59.159523   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:59.159575   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:59.201218   72712 cri.go:89] found id: ""
	I0425 20:06:59.201245   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.201253   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:59.201263   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:59.201312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:59.247277   72712 cri.go:89] found id: ""
	I0425 20:06:59.247305   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.247316   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:59.247324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:59.247379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:59.286713   72712 cri.go:89] found id: ""
	I0425 20:06:59.286738   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.286746   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:59.286751   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:59.286804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:59.332263   72712 cri.go:89] found id: ""
	I0425 20:06:59.332296   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.332320   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:59.332332   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:59.332346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:59.416446   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:59.416477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:59.462125   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:59.462166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:59.514881   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:59.514907   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:59.530109   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:59.530134   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:59.605820   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.106478   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:02.124859   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:02.124934   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:02.180491   72712 cri.go:89] found id: ""
	I0425 20:07:02.180526   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.180537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:02.180545   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:02.180601   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:02.237075   72712 cri.go:89] found id: ""
	I0425 20:07:02.237104   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.237118   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:02.237126   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:02.237190   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:02.295104   72712 cri.go:89] found id: ""
	I0425 20:07:02.295129   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.295140   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:02.295148   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:02.295210   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:02.335392   72712 cri.go:89] found id: ""
	I0425 20:07:02.335418   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.335428   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:02.335435   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:02.335496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:58.871748   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.372545   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.373424   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.917135   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.917504   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.885744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:04.385280   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:02.376964   72712 cri.go:89] found id: ""
	I0425 20:07:02.376990   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.377002   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:02.377009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:02.377066   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:02.415460   72712 cri.go:89] found id: ""
	I0425 20:07:02.415484   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.415491   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:02.415496   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:02.415550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:02.461946   72712 cri.go:89] found id: ""
	I0425 20:07:02.461972   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.461993   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:02.462009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:02.462075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:02.502829   72712 cri.go:89] found id: ""
	I0425 20:07:02.502851   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.502858   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:02.502866   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:02.502878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:02.558264   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:02.558296   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:02.574175   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:02.574225   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:02.649363   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.649389   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:02.649404   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:02.730528   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:02.730560   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.276648   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:05.292055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:05.292121   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:05.332849   72712 cri.go:89] found id: ""
	I0425 20:07:05.332874   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.332884   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:05.332892   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:05.332954   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:05.376446   72712 cri.go:89] found id: ""
	I0425 20:07:05.376475   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.376487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:05.376494   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:05.376556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:05.418635   72712 cri.go:89] found id: ""
	I0425 20:07:05.418664   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.418675   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:05.418682   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:05.418745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:05.459082   72712 cri.go:89] found id: ""
	I0425 20:07:05.459113   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.459123   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:05.459128   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:05.459175   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:05.498473   72712 cri.go:89] found id: ""
	I0425 20:07:05.498502   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.498514   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:05.498521   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:05.498578   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:05.543121   72712 cri.go:89] found id: ""
	I0425 20:07:05.543150   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.543159   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:05.543164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:05.543211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:05.585722   72712 cri.go:89] found id: ""
	I0425 20:07:05.585748   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.585758   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:05.585766   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:05.585826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:05.629614   72712 cri.go:89] found id: ""
	I0425 20:07:05.629647   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.629661   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:05.629671   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:05.629685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:05.683974   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:05.684007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:05.700651   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:05.700685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:05.782097   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:05.782127   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:05.782142   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:05.863881   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:05.863918   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.374553   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:07.872114   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.417080   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.417436   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:10.418259   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.885509   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:09.383078   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.412898   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:08.428152   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:08.428206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:08.468403   72712 cri.go:89] found id: ""
	I0425 20:07:08.468441   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.468455   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:08.468464   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:08.468529   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:08.511246   72712 cri.go:89] found id: ""
	I0425 20:07:08.511285   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.511297   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:08.511304   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:08.511363   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:08.553121   72712 cri.go:89] found id: ""
	I0425 20:07:08.553148   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.553155   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:08.553161   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:08.553214   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:08.589723   72712 cri.go:89] found id: ""
	I0425 20:07:08.589745   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.589755   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:08.589762   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:08.589826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:08.629502   72712 cri.go:89] found id: ""
	I0425 20:07:08.629525   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.629533   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:08.629538   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:08.629591   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:08.677107   72712 cri.go:89] found id: ""
	I0425 20:07:08.677144   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.677153   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:08.677164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:08.677212   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:08.716501   72712 cri.go:89] found id: ""
	I0425 20:07:08.716531   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.716542   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:08.716550   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:08.716611   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:08.763473   72712 cri.go:89] found id: ""
	I0425 20:07:08.763503   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.763515   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:08.763526   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:08.763543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:08.848961   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:08.848985   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:08.849000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:08.945851   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:08.945890   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:08.989429   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:08.989460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:09.042721   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:09.042756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.559400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:11.575100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:11.575180   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:11.613246   72712 cri.go:89] found id: ""
	I0425 20:07:11.613271   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.613284   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:11.613290   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:11.613351   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:11.655158   72712 cri.go:89] found id: ""
	I0425 20:07:11.655189   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.655200   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:11.655208   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:11.655266   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:11.695122   72712 cri.go:89] found id: ""
	I0425 20:07:11.695144   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.695151   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:11.695156   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:11.695205   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:11.735578   72712 cri.go:89] found id: ""
	I0425 20:07:11.735604   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.735615   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:11.735621   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:11.735680   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:11.774750   72712 cri.go:89] found id: ""
	I0425 20:07:11.774785   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.774795   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:11.774803   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:11.774855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:11.814878   72712 cri.go:89] found id: ""
	I0425 20:07:11.814908   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.814920   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:11.814939   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:11.815000   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:11.853262   72712 cri.go:89] found id: ""
	I0425 20:07:11.853295   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.853306   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:11.853313   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:11.853379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:11.897291   72712 cri.go:89] found id: ""
	I0425 20:07:11.897314   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.897324   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:11.897333   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:11.897348   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:11.956913   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:11.956945   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.973787   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:11.973821   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:12.055801   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:12.055826   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:12.055842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:12.140238   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:12.140270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:10.372634   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.374037   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.418299   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.919967   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:11.383994   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:13.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:15.884319   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.685296   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:14.699655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:14.699740   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:14.741907   72712 cri.go:89] found id: ""
	I0425 20:07:14.741936   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.741947   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:14.741955   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:14.742017   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:14.786457   72712 cri.go:89] found id: ""
	I0425 20:07:14.786479   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.786487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:14.786493   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:14.786537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:14.825010   72712 cri.go:89] found id: ""
	I0425 20:07:14.825042   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.825055   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:14.825063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:14.825124   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:14.874834   72712 cri.go:89] found id: ""
	I0425 20:07:14.874856   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.874867   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:14.874875   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:14.874933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:14.914636   72712 cri.go:89] found id: ""
	I0425 20:07:14.914674   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.914685   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:14.914693   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:14.914752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:14.959327   72712 cri.go:89] found id: ""
	I0425 20:07:14.959356   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.959365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:14.959372   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:14.959425   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:15.000637   72712 cri.go:89] found id: ""
	I0425 20:07:15.000666   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.000674   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:15.000680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:15.000728   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:15.040497   72712 cri.go:89] found id: ""
	I0425 20:07:15.040523   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.040531   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:15.040539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:15.040550   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:15.120206   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:15.120240   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:15.168292   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:15.168324   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:15.222133   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:15.222164   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:15.237719   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:15.237746   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:15.323404   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:14.872743   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.375231   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.420149   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:19.420277   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:18.384902   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:20.883469   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.823552   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:17.838837   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:17.838911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:17.880547   72712 cri.go:89] found id: ""
	I0425 20:07:17.880584   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.880595   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:17.880608   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:17.880669   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:17.929700   72712 cri.go:89] found id: ""
	I0425 20:07:17.929730   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.929742   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:17.929797   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:17.929861   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:17.974057   72712 cri.go:89] found id: ""
	I0425 20:07:17.974081   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.974088   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:17.974094   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:17.974142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:18.013173   72712 cri.go:89] found id: ""
	I0425 20:07:18.013200   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.013209   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:18.013215   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:18.013267   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:18.053525   72712 cri.go:89] found id: ""
	I0425 20:07:18.053557   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.053568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:18.053580   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:18.053644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:18.095972   72712 cri.go:89] found id: ""
	I0425 20:07:18.096004   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.096016   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:18.096024   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:18.096089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:18.136792   72712 cri.go:89] found id: ""
	I0425 20:07:18.136823   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.136834   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:18.136842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:18.136904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:18.176562   72712 cri.go:89] found id: ""
	I0425 20:07:18.176594   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.176605   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:18.176619   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:18.176634   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:18.254402   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:18.254440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:18.298075   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:18.298112   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:18.356091   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:18.356124   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:18.373788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:18.373822   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:18.452545   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:20.952752   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:20.972054   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:20.972133   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:21.015572   72712 cri.go:89] found id: ""
	I0425 20:07:21.015602   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.015613   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:21.015621   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:21.015689   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:21.053313   72712 cri.go:89] found id: ""
	I0425 20:07:21.053342   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.053352   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:21.053359   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:21.053422   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:21.090343   72712 cri.go:89] found id: ""
	I0425 20:07:21.090373   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.090384   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:21.090391   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:21.090472   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:21.127148   72712 cri.go:89] found id: ""
	I0425 20:07:21.127174   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.127184   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:21.127192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:21.127258   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:21.167175   72712 cri.go:89] found id: ""
	I0425 20:07:21.167199   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.167207   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:21.167212   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:21.167263   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:21.212740   72712 cri.go:89] found id: ""
	I0425 20:07:21.212771   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.212783   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:21.212791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:21.212856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:21.250751   72712 cri.go:89] found id: ""
	I0425 20:07:21.250774   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.250782   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:21.250788   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:21.250833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:21.292387   72712 cri.go:89] found id: ""
	I0425 20:07:21.292414   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.292426   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:21.292436   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:21.292451   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:21.337695   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:21.337726   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:21.395479   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:21.395520   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:21.411538   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:21.411564   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:21.493248   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:21.493270   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:21.493282   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:19.873680   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.372461   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:21.421770   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:23.426808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.883520   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.884554   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.076755   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:24.093549   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:24.093624   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:24.135660   72712 cri.go:89] found id: ""
	I0425 20:07:24.135686   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.135694   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:24.135705   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:24.135784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:24.179778   72712 cri.go:89] found id: ""
	I0425 20:07:24.179799   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.179807   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:24.179824   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:24.179883   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.226745   72712 cri.go:89] found id: ""
	I0425 20:07:24.226771   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.226780   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:24.226785   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:24.226839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:24.273302   72712 cri.go:89] found id: ""
	I0425 20:07:24.273327   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.273347   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:24.273354   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:24.273421   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:24.314117   72712 cri.go:89] found id: ""
	I0425 20:07:24.314149   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.314160   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:24.314167   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:24.314247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:24.353144   72712 cri.go:89] found id: ""
	I0425 20:07:24.353173   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.353184   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:24.353192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:24.353292   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:24.395899   72712 cri.go:89] found id: ""
	I0425 20:07:24.395925   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.395933   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:24.395938   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:24.395988   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:24.444470   72712 cri.go:89] found id: ""
	I0425 20:07:24.444503   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.444514   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:24.444525   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:24.444540   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:24.499845   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:24.499876   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:24.517421   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:24.517449   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:24.596509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:24.596530   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:24.596543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:24.710844   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:24.710878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.259541   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:27.275551   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:27.275609   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:27.314610   72712 cri.go:89] found id: ""
	I0425 20:07:27.314640   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.314651   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:27.314656   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:27.314712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:27.350100   72712 cri.go:89] found id: ""
	I0425 20:07:27.350132   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.350151   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:27.350158   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:27.350226   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.373886   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:26.873863   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:25.917794   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:28.417757   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:30.419922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.384565   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:29.385043   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.390197   72712 cri.go:89] found id: ""
	I0425 20:07:27.390238   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.390249   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:27.390257   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:27.390312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:27.431936   72712 cri.go:89] found id: ""
	I0425 20:07:27.431961   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.431973   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:27.431980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:27.432038   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:27.469175   72712 cri.go:89] found id: ""
	I0425 20:07:27.469204   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.469212   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:27.469218   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:27.469276   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:27.509385   72712 cri.go:89] found id: ""
	I0425 20:07:27.509416   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.509428   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:27.509436   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:27.509503   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:27.548997   72712 cri.go:89] found id: ""
	I0425 20:07:27.549034   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.549045   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:27.549052   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:27.549111   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:27.588925   72712 cri.go:89] found id: ""
	I0425 20:07:27.588959   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.588973   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:27.588985   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:27.589000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.635005   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:27.635040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:27.686587   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:27.686617   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:27.702913   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:27.702942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:27.775525   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:27.775551   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:27.775562   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.352358   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:30.367016   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:30.367088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:30.410878   72712 cri.go:89] found id: ""
	I0425 20:07:30.410906   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.410917   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:30.410927   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:30.410985   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:30.456150   72712 cri.go:89] found id: ""
	I0425 20:07:30.456173   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.456181   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:30.456186   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:30.456234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:30.495409   72712 cri.go:89] found id: ""
	I0425 20:07:30.495439   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.495450   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:30.495458   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:30.495516   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:30.535863   72712 cri.go:89] found id: ""
	I0425 20:07:30.535895   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.535906   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:30.535912   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:30.535971   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:30.573772   72712 cri.go:89] found id: ""
	I0425 20:07:30.573808   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.573819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:30.573826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:30.573892   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:30.626310   72712 cri.go:89] found id: ""
	I0425 20:07:30.626350   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.626362   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:30.626376   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:30.626438   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:30.666302   72712 cri.go:89] found id: ""
	I0425 20:07:30.666332   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.666343   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:30.666350   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:30.666413   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:30.703478   72712 cri.go:89] found id: ""
	I0425 20:07:30.703507   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.703519   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:30.703529   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:30.703543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:30.756532   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:30.756566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:30.772128   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:30.772158   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:30.853701   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:30.853728   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:30.853743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.935879   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:30.935917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:29.372219   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.872125   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:32.865998   72220 pod_ready.go:81] duration metric: took 4m0.000690329s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:32.866038   72220 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0425 20:07:32.866057   72220 pod_ready.go:38] duration metric: took 4m13.047288103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:32.866091   72220 kubeadm.go:591] duration metric: took 4m22.882679222s to restartPrimaryControlPlane
	W0425 20:07:32.866150   72220 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:32.866182   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:32.917319   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:35.421922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.886418   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.894776   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.483702   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:33.498238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:33.498310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:33.545696   72712 cri.go:89] found id: ""
	I0425 20:07:33.545723   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.545731   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:33.545737   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:33.545791   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:33.590808   72712 cri.go:89] found id: ""
	I0425 20:07:33.590837   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.590849   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:33.590857   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:33.590919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:33.634529   72712 cri.go:89] found id: ""
	I0425 20:07:33.634554   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.634562   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:33.634572   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:33.634640   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:33.679055   72712 cri.go:89] found id: ""
	I0425 20:07:33.679082   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.679093   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:33.679100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:33.679160   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:33.720653   72712 cri.go:89] found id: ""
	I0425 20:07:33.720686   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.720698   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:33.720706   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:33.720777   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:33.766163   72712 cri.go:89] found id: ""
	I0425 20:07:33.766221   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.766233   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:33.766241   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:33.766314   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:33.810804   72712 cri.go:89] found id: ""
	I0425 20:07:33.810830   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.810839   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:33.810844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:33.810908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:33.858109   72712 cri.go:89] found id: ""
	I0425 20:07:33.858140   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.858152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:33.858162   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:33.858176   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:33.926296   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:33.926333   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:33.944220   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:33.944249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:34.042119   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:34.042191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:34.042234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:34.143694   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:34.143732   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:36.691575   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:36.710408   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:36.710490   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:36.760097   72712 cri.go:89] found id: ""
	I0425 20:07:36.760135   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.760144   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:36.760150   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:36.760208   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:36.801508   72712 cri.go:89] found id: ""
	I0425 20:07:36.801532   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.801541   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:36.801546   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:36.801602   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:36.842293   72712 cri.go:89] found id: ""
	I0425 20:07:36.842328   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.842340   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:36.842355   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:36.842418   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:36.884101   72712 cri.go:89] found id: ""
	I0425 20:07:36.884131   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.884141   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:36.884149   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:36.884211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:36.925007   72712 cri.go:89] found id: ""
	I0425 20:07:36.925032   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.925039   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:36.925045   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:36.925109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:36.964975   72712 cri.go:89] found id: ""
	I0425 20:07:36.965009   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.965020   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:36.965028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:36.965088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:37.030956   72712 cri.go:89] found id: ""
	I0425 20:07:37.030987   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.030999   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:37.031007   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:37.031080   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:37.105919   72712 cri.go:89] found id: ""
	I0425 20:07:37.105946   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.105956   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:37.105967   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:37.105983   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:37.196376   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:37.196415   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:37.240296   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:37.240334   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:37.304336   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:37.304371   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:37.323146   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:37.323184   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:37.918245   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.418671   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:36.384384   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:38.387656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.883973   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	W0425 20:07:37.414563   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:39.915087   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:39.930987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:39.931068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:39.967641   72712 cri.go:89] found id: ""
	I0425 20:07:39.967682   72712 logs.go:276] 0 containers: []
	W0425 20:07:39.967693   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:39.967698   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:39.967755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:40.009924   72712 cri.go:89] found id: ""
	I0425 20:07:40.009951   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.009959   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:40.009969   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:40.010019   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:40.049644   72712 cri.go:89] found id: ""
	I0425 20:07:40.049675   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.049689   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:40.049697   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:40.049759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:40.090487   72712 cri.go:89] found id: ""
	I0425 20:07:40.090509   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.090519   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:40.090524   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:40.090583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:40.137634   72712 cri.go:89] found id: ""
	I0425 20:07:40.137664   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.137674   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:40.137681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:40.137745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:40.174832   72712 cri.go:89] found id: ""
	I0425 20:07:40.174863   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.174874   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:40.174882   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:40.174947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:40.212559   72712 cri.go:89] found id: ""
	I0425 20:07:40.212585   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.212593   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:40.212598   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:40.212687   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:40.253459   72712 cri.go:89] found id: ""
	I0425 20:07:40.253494   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.253506   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:40.253518   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:40.253533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:40.311253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:40.311288   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:40.326693   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:40.326722   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:40.405792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:40.405816   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:40.405831   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:40.486712   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:40.486749   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:42.419025   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:44.916387   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:41.387375   72304 pod_ready.go:81] duration metric: took 4m0.010411263s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:41.387396   72304 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:07:41.387402   72304 pod_ready.go:38] duration metric: took 4m6.083068398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:41.387414   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:07:41.387441   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:41.387498   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:41.459873   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:41.459899   72304 cri.go:89] found id: ""
	I0425 20:07:41.459907   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:41.459960   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.465470   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:41.465534   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:41.509504   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:41.509523   72304 cri.go:89] found id: ""
	I0425 20:07:41.509530   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:41.509584   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.515012   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:41.515070   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:41.562701   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:41.562727   72304 cri.go:89] found id: ""
	I0425 20:07:41.562737   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:41.562792   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.567856   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:41.567928   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:41.618411   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:41.618441   72304 cri.go:89] found id: ""
	I0425 20:07:41.618452   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:41.618510   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.625757   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:41.625826   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:41.672707   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:41.672734   72304 cri.go:89] found id: ""
	I0425 20:07:41.672741   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:41.672785   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.678040   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:41.678119   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:41.725172   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:41.725196   72304 cri.go:89] found id: ""
	I0425 20:07:41.725205   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:41.725264   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.730651   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:41.730718   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:41.777224   72304 cri.go:89] found id: ""
	I0425 20:07:41.777269   72304 logs.go:276] 0 containers: []
	W0425 20:07:41.777280   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:41.777290   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:41.777380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:41.821498   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:41.821524   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:41.821531   72304 cri.go:89] found id: ""
	I0425 20:07:41.821541   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:41.821599   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.827065   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.831900   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:41.831924   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:41.893198   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:41.893233   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:41.909141   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:41.909169   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:42.051260   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:42.051305   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:42.109173   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:42.109214   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:42.155862   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:42.155894   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:42.222430   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:42.222466   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:42.265323   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:42.265353   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:42.316534   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:42.316569   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:42.363543   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:42.363568   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:42.422389   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:42.422421   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:42.471230   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:42.471259   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.011223   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.011263   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:45.578411   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:45.597748   72304 api_server.go:72] duration metric: took 4m16.066757074s to wait for apiserver process to appear ...
	I0425 20:07:45.597777   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:07:45.597813   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:45.597861   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:45.649452   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:45.649481   72304 cri.go:89] found id: ""
	I0425 20:07:45.649491   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:45.649534   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.654965   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:45.655023   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:45.701151   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:45.701177   72304 cri.go:89] found id: ""
	I0425 20:07:45.701186   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:45.701238   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.706702   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:45.706767   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:45.763142   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:45.763167   72304 cri.go:89] found id: ""
	I0425 20:07:45.763177   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:45.763220   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.768626   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:45.768684   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:45.816615   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:45.816648   72304 cri.go:89] found id: ""
	I0425 20:07:45.816656   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:45.816701   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.822714   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:45.822790   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:45.875652   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:45.875678   72304 cri.go:89] found id: ""
	I0425 20:07:45.875688   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:45.875737   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.881649   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:45.881719   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:45.930631   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:45.930656   72304 cri.go:89] found id: ""
	I0425 20:07:45.930666   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:45.930721   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.939712   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:45.939783   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:45.984646   72304 cri.go:89] found id: ""
	I0425 20:07:45.984684   72304 logs.go:276] 0 containers: []
	W0425 20:07:45.984693   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:45.984699   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:45.984754   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:46.029752   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.029777   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.029782   72304 cri.go:89] found id: ""
	I0425 20:07:46.029789   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:46.029845   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.035189   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.040479   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:46.040503   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:46.101469   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:46.101509   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:46.167362   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:46.167401   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:46.217732   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:46.217759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:46.264372   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:46.264404   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:43.037730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:43.064471   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:43.064550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:43.130075   72712 cri.go:89] found id: ""
	I0425 20:07:43.130111   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.130129   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:43.130136   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:43.130195   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:43.169628   72712 cri.go:89] found id: ""
	I0425 20:07:43.169663   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.169675   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:43.169682   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:43.169748   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:43.214845   72712 cri.go:89] found id: ""
	I0425 20:07:43.214869   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.214877   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:43.214883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:43.214929   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:43.263047   72712 cri.go:89] found id: ""
	I0425 20:07:43.263069   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.263078   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:43.263083   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:43.263142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:43.313179   72712 cri.go:89] found id: ""
	I0425 20:07:43.313213   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.313223   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:43.313231   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:43.313295   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:43.353440   72712 cri.go:89] found id: ""
	I0425 20:07:43.353468   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.353480   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:43.353488   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:43.353546   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:43.392261   72712 cri.go:89] found id: ""
	I0425 20:07:43.392288   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.392296   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:43.392321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:43.392378   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:43.431111   72712 cri.go:89] found id: ""
	I0425 20:07:43.431139   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.431147   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:43.431155   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:43.431165   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:43.485087   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:43.485120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:43.501508   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:43.501536   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:43.586041   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:43.586073   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:43.586089   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.663194   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.663232   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:46.218461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:46.233195   72712 kubeadm.go:591] duration metric: took 4m4.06065248s to restartPrimaryControlPlane
	W0425 20:07:46.233281   72712 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:46.233311   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:48.166680   72712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.933342568s)
	I0425 20:07:48.166771   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:48.185391   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:07:48.198250   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:07:48.209825   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:07:48.209843   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:07:48.209897   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:07:48.220854   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:07:48.220909   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:07:48.231518   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:07:48.241515   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:07:48.241589   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:07:48.251764   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.261762   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:07:48.261813   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.271952   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:07:48.281914   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:07:48.281986   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:07:48.292879   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:07:48.372322   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:07:48.372460   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:07:48.529730   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:07:48.529854   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:07:48.529979   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:07:48.753171   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:07:48.755473   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:07:48.755590   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:07:48.755692   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:07:48.755809   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:07:48.755905   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:07:48.756132   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:07:48.756317   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:07:48.756867   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:07:48.757498   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:07:48.758073   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:07:48.758581   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:07:48.758745   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:07:48.758842   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:07:48.894873   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:07:48.946907   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:07:49.084938   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:07:49.201925   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:07:49.219675   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:07:49.220891   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:07:49.220951   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:07:49.387310   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:07:46.917886   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:48.919793   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:46.324627   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:46.324653   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:46.382068   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:46.382102   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.424672   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:46.424709   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.466659   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:46.466692   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:46.484868   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:46.484898   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:46.614688   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:46.614720   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:46.666805   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:46.666846   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:47.098854   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:47.098899   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:49.653042   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:07:49.657843   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:07:49.659251   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:07:49.659285   72304 api_server.go:131] duration metric: took 4.061499319s to wait for apiserver health ...
	I0425 20:07:49.659295   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:07:49.659321   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:49.659380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:49.709699   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:49.709721   72304 cri.go:89] found id: ""
	I0425 20:07:49.709729   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:49.709795   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.715369   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:49.715429   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:49.773517   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:49.773544   72304 cri.go:89] found id: ""
	I0425 20:07:49.773554   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:49.773617   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.778984   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:49.779071   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:49.825707   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:49.825739   72304 cri.go:89] found id: ""
	I0425 20:07:49.825746   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:49.825790   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.830613   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:49.830678   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:49.872068   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:49.872094   72304 cri.go:89] found id: ""
	I0425 20:07:49.872104   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:49.872166   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.877311   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:49.877383   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:49.930182   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:49.930216   72304 cri.go:89] found id: ""
	I0425 20:07:49.930228   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:49.930283   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.935415   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:49.935484   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:49.985377   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:49.985404   72304 cri.go:89] found id: ""
	I0425 20:07:49.985412   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:49.985469   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.991021   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:49.991092   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:50.037755   72304 cri.go:89] found id: ""
	I0425 20:07:50.037787   72304 logs.go:276] 0 containers: []
	W0425 20:07:50.037802   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:50.037811   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:50.037875   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:50.083706   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.083731   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.083735   72304 cri.go:89] found id: ""
	I0425 20:07:50.083742   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:50.083793   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.088730   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.094339   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:50.094371   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:50.161538   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:50.161573   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.204178   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:50.204211   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.251315   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:50.251344   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:50.315859   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:50.315886   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:50.367787   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:50.367829   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:50.429509   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:50.429541   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:50.488723   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:50.488759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:50.506838   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:50.506879   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:50.629496   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:50.629526   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:50.689286   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:50.689321   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:50.731343   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:50.731373   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:50.772085   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:50.772114   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:49.389887   72712 out.go:204]   - Booting up control plane ...
	I0425 20:07:49.390011   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:07:49.395060   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:07:49.398108   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:07:49.398220   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:07:49.402596   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:07:53.651817   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:07:53.651845   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.651850   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.651854   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.651859   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.651862   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.651865   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.651872   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.651878   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.651885   72304 system_pods.go:74] duration metric: took 3.992584481s to wait for pod list to return data ...
	I0425 20:07:53.651892   72304 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:07:53.654617   72304 default_sa.go:45] found service account: "default"
	I0425 20:07:53.654641   72304 default_sa.go:55] duration metric: took 2.742232ms for default service account to be created ...
	I0425 20:07:53.654649   72304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:07:53.660082   72304 system_pods.go:86] 8 kube-system pods found
	I0425 20:07:53.660110   72304 system_pods.go:89] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.660116   72304 system_pods.go:89] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.660121   72304 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.660127   72304 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.660131   72304 system_pods.go:89] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.660135   72304 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.660142   72304 system_pods.go:89] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.660148   72304 system_pods.go:89] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.660154   72304 system_pods.go:126] duration metric: took 5.50043ms to wait for k8s-apps to be running ...
	I0425 20:07:53.660161   72304 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:07:53.660201   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:53.677461   72304 system_svc.go:56] duration metric: took 17.289854ms WaitForService to wait for kubelet
	I0425 20:07:53.677499   72304 kubeadm.go:576] duration metric: took 4m24.146512306s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:07:53.677524   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:07:53.681527   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:07:53.681562   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:07:53.681576   72304 node_conditions.go:105] duration metric: took 4.045221ms to run NodePressure ...
	I0425 20:07:53.681591   72304 start.go:240] waiting for startup goroutines ...
	I0425 20:07:53.681605   72304 start.go:245] waiting for cluster config update ...
	I0425 20:07:53.681622   72304 start.go:254] writing updated cluster config ...
	I0425 20:07:53.682002   72304 ssh_runner.go:195] Run: rm -f paused
	I0425 20:07:53.732056   72304 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:07:53.734302   72304 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142196" cluster and "default" namespace by default
	I0425 20:07:51.419808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:53.916090   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:55.917139   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:58.417609   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:00.917152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:02.918628   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.419508   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.765908   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.899694836s)
	I0425 20:08:05.765989   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:05.787711   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:08:05.801717   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:08:05.813710   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:08:05.813741   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:08:05.813802   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:08:05.825122   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:08:05.825202   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:08:05.837118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:08:05.848807   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:08:05.848880   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:08:05.862028   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.873795   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:08:05.873919   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.885577   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:08:05.897605   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:08:05.897685   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:08:05.909284   72220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:08:05.965574   72220 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 20:08:05.965663   72220 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:08:06.133359   72220 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:08:06.133525   72220 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:08:06.133675   72220 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:08:06.391437   72220 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:08:06.393805   72220 out.go:204]   - Generating certificates and keys ...
	I0425 20:08:06.393905   72220 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:08:06.393994   72220 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:08:06.394121   72220 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:08:06.394237   72220 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:08:06.394332   72220 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:08:06.394417   72220 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:08:06.394514   72220 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:08:06.396093   72220 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:08:06.396202   72220 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:08:06.396300   72220 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:08:06.396358   72220 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:08:06.396423   72220 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:08:06.683452   72220 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:08:06.778456   72220 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 20:08:06.923709   72220 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:08:07.079685   72220 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:08:07.170533   72220 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:08:07.171070   72220 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:08:07.173798   72220 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:08:07.175699   72220 out.go:204]   - Booting up control plane ...
	I0425 20:08:07.175824   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:08:07.175924   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:08:07.176060   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:08:07.197685   72220 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:08:07.200579   72220 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:08:07.200645   72220 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:08:07.354665   72220 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 20:08:07.354779   72220 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 20:08:07.855900   72220 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.56346ms
	I0425 20:08:07.856015   72220 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 20:08:07.423114   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:09.425115   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:13.358654   72220 kubeadm.go:309] [api-check] The API server is healthy after 5.502458238s
	I0425 20:08:13.388381   72220 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 20:08:13.908867   72220 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 20:08:13.945417   72220 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 20:08:13.945708   72220 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-744552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 20:08:13.959901   72220 kubeadm.go:309] [bootstrap-token] Using token: r2mxoe.iuelddsr8gvoq1wo
	I0425 20:08:13.961409   72220 out.go:204]   - Configuring RBAC rules ...
	I0425 20:08:13.961552   72220 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 20:08:13.970435   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 20:08:13.978933   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 20:08:13.982503   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 20:08:13.987029   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 20:08:13.990969   72220 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 20:08:14.103051   72220 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 20:08:14.554715   72220 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 20:08:15.105951   72220 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 20:08:15.107134   72220 kubeadm.go:309] 
	I0425 20:08:15.107222   72220 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 20:08:15.107236   72220 kubeadm.go:309] 
	I0425 20:08:15.107336   72220 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 20:08:15.107349   72220 kubeadm.go:309] 
	I0425 20:08:15.107379   72220 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 20:08:15.107463   72220 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 20:08:15.107550   72220 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 20:08:15.107560   72220 kubeadm.go:309] 
	I0425 20:08:15.107657   72220 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 20:08:15.107668   72220 kubeadm.go:309] 
	I0425 20:08:15.107735   72220 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 20:08:15.107747   72220 kubeadm.go:309] 
	I0425 20:08:15.107807   72220 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 20:08:15.107935   72220 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 20:08:15.108030   72220 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 20:08:15.108042   72220 kubeadm.go:309] 
	I0425 20:08:15.108154   72220 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 20:08:15.108269   72220 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 20:08:15.108280   72220 kubeadm.go:309] 
	I0425 20:08:15.108395   72220 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.108556   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
	I0425 20:08:15.108594   72220 kubeadm.go:309] 	--control-plane 
	I0425 20:08:15.108603   72220 kubeadm.go:309] 
	I0425 20:08:15.108719   72220 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 20:08:15.108730   72220 kubeadm.go:309] 
	I0425 20:08:15.108849   72220 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.109004   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed 
	I0425 20:08:15.109717   72220 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:08:15.109778   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:08:15.109797   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:08:15.111712   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:08:11.918414   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:14.420753   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:15.113288   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:08:15.129693   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:08:15.157631   72220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:08:15.157709   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.157760   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-744552 minikube.k8s.io/updated_at=2024_04_25T20_08_15_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=no-preload-744552 minikube.k8s.io/primary=true
	I0425 20:08:15.374198   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.418592   72220 ops.go:34] apiserver oom_adj: -16
	I0425 20:08:15.874721   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.374969   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.875091   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.375038   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.874685   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:18.374802   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.917617   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:19.421721   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:18.874931   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.374961   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.874349   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.374787   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.875130   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.374959   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.874325   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.374798   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.875034   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:23.374899   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.917898   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:22.917132   71966 pod_ready.go:81] duration metric: took 4m0.007062693s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:08:22.917156   71966 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:08:22.917164   71966 pod_ready.go:38] duration metric: took 4m4.548150095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:22.917179   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:22.917211   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:22.917270   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:22.982604   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:22.982631   71966 cri.go:89] found id: ""
	I0425 20:08:22.982640   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:22.982698   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:22.988558   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:22.988618   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:23.031937   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.031964   71966 cri.go:89] found id: ""
	I0425 20:08:23.031973   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:23.032031   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.037315   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:23.037371   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:23.089839   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.089862   71966 cri.go:89] found id: ""
	I0425 20:08:23.089872   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:23.089936   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.095247   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:23.095309   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:23.136257   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.136286   71966 cri.go:89] found id: ""
	I0425 20:08:23.136294   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:23.136357   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.142548   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:23.142608   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:23.186190   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.186229   71966 cri.go:89] found id: ""
	I0425 20:08:23.186239   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:23.186301   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.191422   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:23.191494   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:23.242326   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.242361   71966 cri.go:89] found id: ""
	I0425 20:08:23.242371   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:23.242437   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.248578   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:23.248642   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:23.286781   71966 cri.go:89] found id: ""
	I0425 20:08:23.286807   71966 logs.go:276] 0 containers: []
	W0425 20:08:23.286817   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:23.286823   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:23.286885   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:23.334728   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:23.334754   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.334761   71966 cri.go:89] found id: ""
	I0425 20:08:23.334770   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:23.334831   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.340288   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.344787   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:23.344808   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:23.401830   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:23.401865   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:23.425683   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:23.425715   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:23.568527   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:23.568558   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.608747   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:23.608776   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.647962   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:23.647996   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.687270   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:23.687308   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:23.745081   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:23.745112   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:23.799375   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:23.799405   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.853199   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:23.853232   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.896535   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:23.896571   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.964317   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:23.964350   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:24.013196   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:24.013231   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:23.874275   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.374250   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.874396   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.374767   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.874968   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.374333   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.874916   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.374369   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.499044   72220 kubeadm.go:1107] duration metric: took 12.341393953s to wait for elevateKubeSystemPrivileges
	W0425 20:08:27.499078   72220 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 20:08:27.499087   72220 kubeadm.go:393] duration metric: took 5m17.572541498s to StartCluster
	I0425 20:08:27.499108   72220 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.499189   72220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:08:27.500940   72220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.501192   72220 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:08:27.503257   72220 out.go:177] * Verifying Kubernetes components...
	I0425 20:08:27.501308   72220 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:08:27.501405   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:08:27.505389   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:08:27.505403   72220 addons.go:69] Setting storage-provisioner=true in profile "no-preload-744552"
	I0425 20:08:27.505438   72220 addons.go:234] Setting addon storage-provisioner=true in "no-preload-744552"
	W0425 20:08:27.505453   72220 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:08:27.505490   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505505   72220 addons.go:69] Setting metrics-server=true in profile "no-preload-744552"
	I0425 20:08:27.505535   72220 addons.go:234] Setting addon metrics-server=true in "no-preload-744552"
	W0425 20:08:27.505546   72220 addons.go:243] addon metrics-server should already be in state true
	I0425 20:08:27.505574   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505895   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.505922   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.505492   72220 addons.go:69] Setting default-storageclass=true in profile "no-preload-744552"
	I0425 20:08:27.505990   72220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-744552"
	I0425 20:08:27.505952   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506099   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.506418   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506467   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.523666   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0425 20:08:27.526950   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0425 20:08:27.526972   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.526981   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I0425 20:08:27.527536   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527606   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527662   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.527683   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528039   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528059   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528122   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528228   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528242   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528601   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528644   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528712   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.528735   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.528800   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.529228   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.529246   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.532151   72220 addons.go:234] Setting addon default-storageclass=true in "no-preload-744552"
	W0425 20:08:27.532171   72220 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:08:27.532204   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.532543   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.532582   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.547165   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0425 20:08:27.547700   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.548354   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.548368   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.548675   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.548793   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.550640   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.554301   72220 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:08:27.553061   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0425 20:08:27.553099   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0425 20:08:27.555613   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:08:27.555630   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:08:27.555652   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.556177   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556181   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556724   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556739   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.556868   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556879   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.557128   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.557700   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.557729   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.558142   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.558406   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.559420   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.559990   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.560057   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.560076   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.560177   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.560333   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.560549   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.560967   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.562839   72220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:08:27.564442   72220 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.564480   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:08:27.564517   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.567912   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.568153   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.568171   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.570321   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.570514   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.570709   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.570945   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.578396   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
	I0425 20:08:27.586629   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.587070   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.587082   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.587584   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.587736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.589708   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.589937   72220 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.589948   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:08:27.589961   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.592640   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.592983   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.593007   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.593261   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.593541   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.593736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.593906   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.783858   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:08:27.820917   72220 node_ready.go:35] waiting up to 6m0s for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832349   72220 node_ready.go:49] node "no-preload-744552" has status "Ready":"True"
	I0425 20:08:27.832377   72220 node_ready.go:38] duration metric: took 11.423909ms for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832390   72220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:27.844475   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:27.886461   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:08:27.886483   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:08:27.899413   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.931511   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.935073   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:08:27.935098   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:08:27.989052   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:27.989082   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:08:28.016326   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:28.551863   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551894   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.551964   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551976   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552255   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552280   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552292   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552315   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552358   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.552397   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552405   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552414   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552421   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552571   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552597   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552710   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552736   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.578416   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.578445   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.578730   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.578776   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.578789   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.945831   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.945861   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946170   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946191   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946214   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.946224   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946531   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946549   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946560   72220 addons.go:470] Verifying addon metrics-server=true in "no-preload-744552"
	I0425 20:08:28.946570   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.948485   72220 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0425 20:08:27.005360   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:27.024856   71966 api_server.go:72] duration metric: took 4m14.401244231s to wait for apiserver process to appear ...
	I0425 20:08:27.024881   71966 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:08:27.024922   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:27.024982   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:27.072098   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:27.072129   71966 cri.go:89] found id: ""
	I0425 20:08:27.072140   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:27.072210   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.077726   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:27.077793   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:27.118834   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:27.118855   71966 cri.go:89] found id: ""
	I0425 20:08:27.118864   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:27.118917   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.125277   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:27.125347   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:27.167036   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.167064   71966 cri.go:89] found id: ""
	I0425 20:08:27.167074   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:27.167131   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.172390   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:27.172468   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:27.212933   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:27.212957   71966 cri.go:89] found id: ""
	I0425 20:08:27.212967   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:27.213022   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.218033   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:27.218083   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:27.259294   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:27.259321   71966 cri.go:89] found id: ""
	I0425 20:08:27.259331   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:27.259384   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.265537   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:27.265610   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:27.312145   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:27.312174   71966 cri.go:89] found id: ""
	I0425 20:08:27.312183   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:27.312240   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.318346   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:27.318405   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:27.362467   71966 cri.go:89] found id: ""
	I0425 20:08:27.362495   71966 logs.go:276] 0 containers: []
	W0425 20:08:27.362504   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:27.362509   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:27.362569   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:27.406810   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:27.406834   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.406839   71966 cri.go:89] found id: ""
	I0425 20:08:27.406846   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:27.406903   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.412431   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.421695   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:27.421725   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.472832   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:27.472863   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.535799   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:27.535830   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:28.004964   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:28.005006   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:28.072378   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:28.072417   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:28.236479   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:28.236523   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:28.296095   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:28.296133   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:28.351290   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:28.351314   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:28.400529   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:28.400567   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:28.459149   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:28.459178   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:28.507818   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:28.507844   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:28.565596   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:28.565627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:28.588509   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:28.588535   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:29.403321   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:08:29.403717   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:29.404001   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:28.950127   72220 addons.go:505] duration metric: took 1.448816058s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0425 20:08:29.862142   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:30.851653   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.851677   72220 pod_ready.go:81] duration metric: took 3.007171918s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.851689   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857090   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.857108   72220 pod_ready.go:81] duration metric: took 5.412841ms for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857117   72220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863315   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.863331   72220 pod_ready.go:81] duration metric: took 6.207835ms for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863339   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867557   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.867579   72220 pod_ready.go:81] duration metric: took 4.23311ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867590   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872391   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.872407   72220 pod_ready.go:81] duration metric: took 4.810397ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872415   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249226   72220 pod_ready.go:92] pod "kube-proxy-22w7x" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.249259   72220 pod_ready.go:81] duration metric: took 376.837327ms for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249284   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649908   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.649934   72220 pod_ready.go:81] duration metric: took 400.641991ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649945   72220 pod_ready.go:38] duration metric: took 3.817541056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:31.649962   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:31.650025   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:31.684094   72220 api_server.go:72] duration metric: took 4.182865357s to wait for apiserver process to appear ...
	I0425 20:08:31.684123   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:08:31.684146   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:08:31.689688   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:08:31.690939   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.690963   72220 api_server.go:131] duration metric: took 6.831773ms to wait for apiserver health ...
	I0425 20:08:31.690973   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.853816   72220 system_pods.go:59] 9 kube-system pods found
	I0425 20:08:31.853849   72220 system_pods.go:61] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:31.853856   72220 system_pods.go:61] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:31.853861   72220 system_pods.go:61] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:31.853868   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:31.853872   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:31.853877   72220 system_pods.go:61] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:31.853881   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:31.853889   72220 system_pods.go:61] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:31.853894   72220 system_pods.go:61] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:31.853907   72220 system_pods.go:74] duration metric: took 162.928561ms to wait for pod list to return data ...
	I0425 20:08:31.853916   72220 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:32.049906   72220 default_sa.go:45] found service account: "default"
	I0425 20:08:32.049932   72220 default_sa.go:55] duration metric: took 196.003422ms for default service account to be created ...
	I0425 20:08:32.049942   72220 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:32.255245   72220 system_pods.go:86] 9 kube-system pods found
	I0425 20:08:32.255290   72220 system_pods.go:89] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:32.255298   72220 system_pods.go:89] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:32.255304   72220 system_pods.go:89] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:32.255311   72220 system_pods.go:89] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:32.255317   72220 system_pods.go:89] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:32.255322   72220 system_pods.go:89] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:32.255328   72220 system_pods.go:89] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:32.255338   72220 system_pods.go:89] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:32.255348   72220 system_pods.go:89] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:32.255368   72220 system_pods.go:126] duration metric: took 205.41905ms to wait for k8s-apps to be running ...
	I0425 20:08:32.255378   72220 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:32.255429   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:32.274141   72220 system_svc.go:56] duration metric: took 18.75721ms WaitForService to wait for kubelet
	I0425 20:08:32.274173   72220 kubeadm.go:576] duration metric: took 4.77294686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:32.274198   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:32.449699   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:32.449727   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:32.449741   72220 node_conditions.go:105] duration metric: took 175.536406ms to run NodePressure ...
	I0425 20:08:32.449755   72220 start.go:240] waiting for startup goroutines ...
	I0425 20:08:32.449765   72220 start.go:245] waiting for cluster config update ...
	I0425 20:08:32.449778   72220 start.go:254] writing updated cluster config ...
	I0425 20:08:32.450108   72220 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:32.503317   72220 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:32.505391   72220 out.go:177] * Done! kubectl is now configured to use "no-preload-744552" cluster and "default" namespace by default
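
	The run above waits on the kubelet service, the kube-system pods, and the node capacity before declaring the cluster ready. A rough manual equivalent of those checks — an illustrative sketch only, using the profile/context name from the log above rather than minikube's own wait logic — would be:

	    # illustrative sketch only; profile name taken from the log above
	    minikube -p no-preload-744552 ssh -- sudo systemctl is-active kubelet
	    kubectl --context no-preload-744552 get pods -n kube-system
	    kubectl --context no-preload-744552 get nodes -o jsonpath='{.items[0].status.capacity}'
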
	I0425 20:08:31.153636   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:08:31.158526   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:08:31.159775   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.159817   71966 api_server.go:131] duration metric: took 4.134911832s to wait for apiserver health ...
	I0425 20:08:31.159827   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.159847   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:31.159890   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:31.201597   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:31.201616   71966 cri.go:89] found id: ""
	I0425 20:08:31.201625   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:31.201667   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.206973   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:31.207039   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:31.248400   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:31.248424   71966 cri.go:89] found id: ""
	I0425 20:08:31.248435   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:31.248496   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.253822   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:31.253879   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:31.298921   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:31.298946   71966 cri.go:89] found id: ""
	I0425 20:08:31.298956   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:31.299003   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.304691   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:31.304758   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:31.351773   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:31.351796   71966 cri.go:89] found id: ""
	I0425 20:08:31.351804   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:31.351851   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.356599   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:31.356651   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:31.399655   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:31.399678   71966 cri.go:89] found id: ""
	I0425 20:08:31.399686   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:31.399740   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.405103   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:31.405154   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:31.452763   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:31.452785   71966 cri.go:89] found id: ""
	I0425 20:08:31.452794   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:31.452840   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.457788   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:31.457838   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:31.503746   71966 cri.go:89] found id: ""
	I0425 20:08:31.503780   71966 logs.go:276] 0 containers: []
	W0425 20:08:31.503791   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:31.503798   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:31.503868   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:31.548517   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:31.548543   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:31.548555   71966 cri.go:89] found id: ""
	I0425 20:08:31.548565   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:31.548631   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.553673   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.558271   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:31.558290   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:31.974349   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:31.974387   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:32.033292   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:32.033327   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:32.050762   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:32.050791   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:32.101591   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:32.101627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:32.142626   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:32.142652   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:32.203270   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:32.203315   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:32.247021   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:32.247048   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:32.294900   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:32.294936   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:32.353902   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:32.353934   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:32.488543   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:32.488584   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:32.569303   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:32.569358   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:32.622767   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:32.622802   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:35.181779   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:08:35.181813   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.181820   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.181826   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.181832   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.181837   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.181843   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.181851   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.181858   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.181867   71966 system_pods.go:74] duration metric: took 4.022033823s to wait for pod list to return data ...
	I0425 20:08:35.181879   71966 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:35.185387   71966 default_sa.go:45] found service account: "default"
	I0425 20:08:35.185413   71966 default_sa.go:55] duration metric: took 3.523751ms for default service account to be created ...
	I0425 20:08:35.185423   71966 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:35.195075   71966 system_pods.go:86] 8 kube-system pods found
	I0425 20:08:35.195099   71966 system_pods.go:89] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.195104   71966 system_pods.go:89] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.195109   71966 system_pods.go:89] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.195114   71966 system_pods.go:89] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.195118   71966 system_pods.go:89] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.195122   71966 system_pods.go:89] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.195128   71966 system_pods.go:89] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.195133   71966 system_pods.go:89] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.195139   71966 system_pods.go:126] duration metric: took 9.711803ms to wait for k8s-apps to be running ...
	I0425 20:08:35.195155   71966 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:35.195195   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:35.213494   71966 system_svc.go:56] duration metric: took 18.331225ms WaitForService to wait for kubelet
	I0425 20:08:35.213523   71966 kubeadm.go:576] duration metric: took 4m22.589912913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:35.213545   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:35.216461   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:35.216481   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:35.216493   71966 node_conditions.go:105] duration metric: took 2.94061ms to run NodePressure ...
	I0425 20:08:35.216502   71966 start.go:240] waiting for startup goroutines ...
	I0425 20:08:35.216509   71966 start.go:245] waiting for cluster config update ...
	I0425 20:08:35.216518   71966 start.go:254] writing updated cluster config ...
	I0425 20:08:35.216750   71966 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:35.265836   71966 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:35.269026   71966 out.go:177] * Done! kubectl is now configured to use "embed-certs-512173" cluster and "default" namespace by default
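
	The "Gathering logs" steps above collect each component's logs with journalctl and crictl. The same collection can be reproduced by hand on the node; this is only a sketch, and the container ID is a placeholder for one of the IDs found above:

	    # illustrative sketch only; run inside "minikube -p embed-certs-512173 ssh"
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo crictl ps -a
	    sudo crictl logs --tail 400 <container-id>
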
	I0425 20:08:34.404410   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:34.404662   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:44.405293   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:44.405518   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:04.406406   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:04.406676   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.407969   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:44.408240   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.408259   72712 kubeadm.go:309] 
	I0425 20:09:44.408293   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:09:44.408355   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:09:44.408373   72712 kubeadm.go:309] 
	I0425 20:09:44.408417   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:09:44.408448   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:09:44.408562   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:09:44.408575   72712 kubeadm.go:309] 
	I0425 20:09:44.408655   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:09:44.408684   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:09:44.408711   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:09:44.408718   72712 kubeadm.go:309] 
	I0425 20:09:44.408812   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:09:44.408912   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:09:44.408939   72712 kubeadm.go:309] 
	I0425 20:09:44.409085   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:09:44.409217   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:09:44.409341   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:09:44.409418   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:09:44.409433   72712 kubeadm.go:309] 
	I0425 20:09:44.410319   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:09:44.410423   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:09:44.410510   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0425 20:09:44.410640   72712 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0425 20:09:44.410700   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:09:45.395830   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:09:45.412628   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:09:45.423387   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:09:45.423412   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:09:45.423465   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:09:45.434317   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:09:45.434389   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:09:45.445657   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:09:45.455698   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:09:45.455772   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:09:45.466137   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.476140   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:09:45.476192   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.486410   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:09:45.495465   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:09:45.495522   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:09:45.505410   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:09:45.726416   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:11:42.214574   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:11:42.214715   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0425 20:11:42.216323   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:11:42.216393   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:11:42.216507   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:11:42.216650   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:11:42.216795   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:11:42.216882   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:11:42.218766   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:11:42.218847   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:11:42.218923   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:11:42.219042   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:11:42.219103   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:11:42.219167   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:11:42.219237   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:11:42.219321   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:11:42.219407   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:11:42.219519   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:11:42.219639   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:11:42.219694   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:11:42.219742   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:11:42.219786   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:11:42.219831   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:11:42.219883   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:11:42.219929   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:11:42.220029   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:11:42.220139   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:11:42.220204   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:11:42.220308   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:11:42.222891   72712 out.go:204]   - Booting up control plane ...
	I0425 20:11:42.222979   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:11:42.223054   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:11:42.223129   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:11:42.223222   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:11:42.223404   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:11:42.223459   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:11:42.223565   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.223835   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.223937   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224165   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224243   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224457   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224541   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224799   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224902   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.225125   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.225134   72712 kubeadm.go:309] 
	I0425 20:11:42.225166   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:11:42.225204   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:11:42.225210   72712 kubeadm.go:309] 
	I0425 20:11:42.225239   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:11:42.225267   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:11:42.225352   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:11:42.225358   72712 kubeadm.go:309] 
	I0425 20:11:42.225446   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:11:42.225476   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:11:42.225522   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:11:42.225533   72712 kubeadm.go:309] 
	I0425 20:11:42.225626   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:11:42.225714   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:11:42.225729   72712 kubeadm.go:309] 
	I0425 20:11:42.225875   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:11:42.225951   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:11:42.226022   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:11:42.226096   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:11:42.226129   72712 kubeadm.go:309] 
	I0425 20:11:42.226162   72712 kubeadm.go:393] duration metric: took 8m0.122692927s to StartCluster
	I0425 20:11:42.226242   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:11:42.226299   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:11:42.283295   72712 cri.go:89] found id: ""
	I0425 20:11:42.283320   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.283329   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:11:42.283335   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:11:42.283389   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:11:42.322462   72712 cri.go:89] found id: ""
	I0425 20:11:42.322493   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.322505   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:11:42.322512   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:11:42.322574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:11:42.372329   72712 cri.go:89] found id: ""
	I0425 20:11:42.372355   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.372363   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:11:42.372369   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:11:42.372416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:11:42.420348   72712 cri.go:89] found id: ""
	I0425 20:11:42.420374   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.420382   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:11:42.420389   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:11:42.420447   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:11:42.460274   72712 cri.go:89] found id: ""
	I0425 20:11:42.460317   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.460329   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:11:42.460337   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:11:42.460395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:11:42.503828   72712 cri.go:89] found id: ""
	I0425 20:11:42.503855   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.503867   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:11:42.503874   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:11:42.503933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:11:42.545045   72712 cri.go:89] found id: ""
	I0425 20:11:42.545070   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.545086   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:11:42.545095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:11:42.545156   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:11:42.586389   72712 cri.go:89] found id: ""
	I0425 20:11:42.586413   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.586421   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:11:42.586429   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:11:42.586440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:11:42.602835   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:11:42.602863   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:11:42.695131   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:11:42.695153   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:11:42.695168   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:11:42.819889   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:11:42.819922   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:11:42.869446   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:11:42.869474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0425 20:11:42.927184   72712 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0425 20:11:42.927236   72712 out.go:239] * 
	W0425 20:11:42.927291   72712 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.927311   72712 out.go:239] * 
	W0425 20:11:42.928275   72712 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 20:11:42.931353   72712 out.go:177] 
	W0425 20:11:42.932654   72712 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.932696   72712 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0425 20:11:42.932713   72712 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0425 20:11:42.934227   72712 out.go:177] 
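	
	A minimal shell sketch of the follow-ups the output above points at: probing the kubelet health endpoint the wait-control-plane phase polls, the systemd and crictl checks kubeadm recommends, and a retry with the suggested cgroup-driver setting. `<profile>` and `CONTAINERID` are placeholders; none of these lines are commands captured in this run.
	
		# Probe the kubelet health endpoint that kept refusing connections (run on the node).
		curl -sSL http://localhost:10248/healthz
		# Inspect kubelet state and recent logs, per the kubeadm hint for systemd hosts.
		systemctl status kubelet
		journalctl -xeu kubelet
		# List Kubernetes containers via CRI-O and fetch logs from a failing one.
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
		# Retry the start with the cgroup driver named in the suggestion above.
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd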
	
	
	==> CRI-O <==
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.748923306Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076254748898975,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58257c69-afab-4593-b0d1-f15481dcc176 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.749596731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ab5e65a-30b8-4ed9-a886-c31730accaa5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.749673497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ab5e65a-30b8-4ed9-a886-c31730accaa5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.749892604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f60cda47620ae0c7640d7f7b8531567b0f12ca7f4be1b5ae77939138e3bfce0c,PodSandboxId:39b55182ea9b9b9511d89190e753e0dcbacdd59e1609c3c3c5acbccb3b80bb66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709956145126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxxt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44599c42-87cd-44ff-9377-fd52993919f6,},Annotations:map[string]string{io.kubernetes.container.hash: 8edb01ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35dd66e9dd75e198b6914d960aa13c56c30942fed1b2eab52fa6f605277304a1,PodSandboxId:2e920db1f861f5161dd3eaf69ba95be9ed1eaa121acd8414ed1b9d2347affe6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709907761921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdl2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f11bf4f-f370-4957-95a1-773d255d227b,},Annotations:map[string]string{io.kubernetes.container.hash: dcf79dd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d7cefd108b725836b24bb8c42f8a398b76c5c5d495b7ea5d653ac67a685582,PodSandboxId:a361913ed2fb8ee1bcad23cf095cbb3983f37812a3222b5ca86f6d5848f3c615,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714075709104778853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22w7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dda9cd-3cf5-4fdd-b4b6-f88e0360f513,},Annotations:map[string]string{io.kubernetes.container.hash: a4be3b58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280170b99deab6385c954bfdfe114fe4be6b971d8ec047921e7c36e2c62323b,PodSandboxId:33348938e34cfc8db6dd875de4fdec025925b7a31657ce148254ce89ebae9eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171407570906
3034608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1960de28-d946-4cfb-99fd-dd89fd7f6e67,},Annotations:map[string]string{io.kubernetes.container.hash: ccd0a75c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d760a6bfe9ed83526ada5d764e3d80fab995270530df7bb4df4733e3fe72bdfb,PodSandboxId:8dda842c6e476549593da9aaf1c47aa24817c668b78f816e5c20239ecab56b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075688424967058,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a480a53c7855225626492dfd8c653ea3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eb87cf504ed4707126c6b0f5e37b36ebfc7801efd594d7450f75ae6d82c303,PodSandboxId:742c3330a0a89d0dd9ef08ebac6b4b284024139c3dce81ec2bf9994ab0402882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075688392667005,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 747e3598f2fa1ffc2618ff97b0571488,},Annotations:map[string]string{io.kubernetes.container.hash: 829b1439,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d02c3f6172773ca8b2e3501f4c826c0d7e90c1d1b9df69650fc9b06fbfc1e09,PodSandboxId:85d4601c24196f706b84b44e7e24a48f53e20aa45629b1291a23ecd091b7a940,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075688316745642,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b282287dd65b57af6e5aa6ec38640dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7405b686b29b061435baf005e384e6b2c3cfdb12bf75325a1723414682df0f,PodSandboxId:7cdef08fe249bf9417509aef7b10bbc9536e2fe03517e1684c4d6f66c3191ef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075688323330202,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80048aa3ed845c1d63441fe380468533,},Annotations:map[string]string{io.kubernetes.container.hash: a6e99913,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ab5e65a-30b8-4ed9-a886-c31730accaa5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.800815666Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba1a0687-11a4-468f-8e9a-afe92055571e name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.800925960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba1a0687-11a4-468f-8e9a-afe92055571e name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.802189084Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8ee856b-da5d-4f80-b898-853233aefd5a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.802656584Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076254802631280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8ee856b-da5d-4f80-b898-853233aefd5a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.803493406Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8db1e6d3-0b0d-423c-b8a3-b0e494354f2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.803547023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8db1e6d3-0b0d-423c-b8a3-b0e494354f2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.803723002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f60cda47620ae0c7640d7f7b8531567b0f12ca7f4be1b5ae77939138e3bfce0c,PodSandboxId:39b55182ea9b9b9511d89190e753e0dcbacdd59e1609c3c3c5acbccb3b80bb66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709956145126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxxt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44599c42-87cd-44ff-9377-fd52993919f6,},Annotations:map[string]string{io.kubernetes.container.hash: 8edb01ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35dd66e9dd75e198b6914d960aa13c56c30942fed1b2eab52fa6f605277304a1,PodSandboxId:2e920db1f861f5161dd3eaf69ba95be9ed1eaa121acd8414ed1b9d2347affe6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709907761921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdl2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f11bf4f-f370-4957-95a1-773d255d227b,},Annotations:map[string]string{io.kubernetes.container.hash: dcf79dd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d7cefd108b725836b24bb8c42f8a398b76c5c5d495b7ea5d653ac67a685582,PodSandboxId:a361913ed2fb8ee1bcad23cf095cbb3983f37812a3222b5ca86f6d5848f3c615,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714075709104778853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22w7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dda9cd-3cf5-4fdd-b4b6-f88e0360f513,},Annotations:map[string]string{io.kubernetes.container.hash: a4be3b58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280170b99deab6385c954bfdfe114fe4be6b971d8ec047921e7c36e2c62323b,PodSandboxId:33348938e34cfc8db6dd875de4fdec025925b7a31657ce148254ce89ebae9eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171407570906
3034608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1960de28-d946-4cfb-99fd-dd89fd7f6e67,},Annotations:map[string]string{io.kubernetes.container.hash: ccd0a75c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d760a6bfe9ed83526ada5d764e3d80fab995270530df7bb4df4733e3fe72bdfb,PodSandboxId:8dda842c6e476549593da9aaf1c47aa24817c668b78f816e5c20239ecab56b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075688424967058,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a480a53c7855225626492dfd8c653ea3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eb87cf504ed4707126c6b0f5e37b36ebfc7801efd594d7450f75ae6d82c303,PodSandboxId:742c3330a0a89d0dd9ef08ebac6b4b284024139c3dce81ec2bf9994ab0402882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075688392667005,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 747e3598f2fa1ffc2618ff97b0571488,},Annotations:map[string]string{io.kubernetes.container.hash: 829b1439,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d02c3f6172773ca8b2e3501f4c826c0d7e90c1d1b9df69650fc9b06fbfc1e09,PodSandboxId:85d4601c24196f706b84b44e7e24a48f53e20aa45629b1291a23ecd091b7a940,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075688316745642,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b282287dd65b57af6e5aa6ec38640dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7405b686b29b061435baf005e384e6b2c3cfdb12bf75325a1723414682df0f,PodSandboxId:7cdef08fe249bf9417509aef7b10bbc9536e2fe03517e1684c4d6f66c3191ef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075688323330202,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80048aa3ed845c1d63441fe380468533,},Annotations:map[string]string{io.kubernetes.container.hash: a6e99913,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8db1e6d3-0b0d-423c-b8a3-b0e494354f2a name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.854492262Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab9bbd6c-2725-4924-ab88-d37494400975 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.854571617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab9bbd6c-2725-4924-ab88-d37494400975 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.856229104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f30746d6-8659-4672-a967-f9192d031507 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.856699518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076254856672628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f30746d6-8659-4672-a967-f9192d031507 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.857663463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc313c50-2ccb-48a2-90ab-b645476a4bef name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.857750833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc313c50-2ccb-48a2-90ab-b645476a4bef name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.857927824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f60cda47620ae0c7640d7f7b8531567b0f12ca7f4be1b5ae77939138e3bfce0c,PodSandboxId:39b55182ea9b9b9511d89190e753e0dcbacdd59e1609c3c3c5acbccb3b80bb66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709956145126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxxt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44599c42-87cd-44ff-9377-fd52993919f6,},Annotations:map[string]string{io.kubernetes.container.hash: 8edb01ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35dd66e9dd75e198b6914d960aa13c56c30942fed1b2eab52fa6f605277304a1,PodSandboxId:2e920db1f861f5161dd3eaf69ba95be9ed1eaa121acd8414ed1b9d2347affe6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709907761921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdl2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f11bf4f-f370-4957-95a1-773d255d227b,},Annotations:map[string]string{io.kubernetes.container.hash: dcf79dd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d7cefd108b725836b24bb8c42f8a398b76c5c5d495b7ea5d653ac67a685582,PodSandboxId:a361913ed2fb8ee1bcad23cf095cbb3983f37812a3222b5ca86f6d5848f3c615,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714075709104778853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22w7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dda9cd-3cf5-4fdd-b4b6-f88e0360f513,},Annotations:map[string]string{io.kubernetes.container.hash: a4be3b58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280170b99deab6385c954bfdfe114fe4be6b971d8ec047921e7c36e2c62323b,PodSandboxId:33348938e34cfc8db6dd875de4fdec025925b7a31657ce148254ce89ebae9eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171407570906
3034608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1960de28-d946-4cfb-99fd-dd89fd7f6e67,},Annotations:map[string]string{io.kubernetes.container.hash: ccd0a75c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d760a6bfe9ed83526ada5d764e3d80fab995270530df7bb4df4733e3fe72bdfb,PodSandboxId:8dda842c6e476549593da9aaf1c47aa24817c668b78f816e5c20239ecab56b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075688424967058,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a480a53c7855225626492dfd8c653ea3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eb87cf504ed4707126c6b0f5e37b36ebfc7801efd594d7450f75ae6d82c303,PodSandboxId:742c3330a0a89d0dd9ef08ebac6b4b284024139c3dce81ec2bf9994ab0402882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075688392667005,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 747e3598f2fa1ffc2618ff97b0571488,},Annotations:map[string]string{io.kubernetes.container.hash: 829b1439,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d02c3f6172773ca8b2e3501f4c826c0d7e90c1d1b9df69650fc9b06fbfc1e09,PodSandboxId:85d4601c24196f706b84b44e7e24a48f53e20aa45629b1291a23ecd091b7a940,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075688316745642,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b282287dd65b57af6e5aa6ec38640dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7405b686b29b061435baf005e384e6b2c3cfdb12bf75325a1723414682df0f,PodSandboxId:7cdef08fe249bf9417509aef7b10bbc9536e2fe03517e1684c4d6f66c3191ef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075688323330202,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80048aa3ed845c1d63441fe380468533,},Annotations:map[string]string{io.kubernetes.container.hash: a6e99913,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc313c50-2ccb-48a2-90ab-b645476a4bef name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.901793843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f84ba4dc-b54f-4f77-aa96-50cc1a9242e7 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.901890324Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f84ba4dc-b54f-4f77-aa96-50cc1a9242e7 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.903201467Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38f06b62-f392-4d54-b201-ed0a410ffa1a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.903709340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076254903683199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38f06b62-f392-4d54-b201-ed0a410ffa1a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.904863223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bb72396-0888-4fb2-98e9-4465d7350b01 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.904944948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bb72396-0888-4fb2-98e9-4465d7350b01 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:34 no-preload-744552 crio[729]: time="2024-04-25 20:17:34.905133533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f60cda47620ae0c7640d7f7b8531567b0f12ca7f4be1b5ae77939138e3bfce0c,PodSandboxId:39b55182ea9b9b9511d89190e753e0dcbacdd59e1609c3c3c5acbccb3b80bb66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709956145126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxxt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44599c42-87cd-44ff-9377-fd52993919f6,},Annotations:map[string]string{io.kubernetes.container.hash: 8edb01ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35dd66e9dd75e198b6914d960aa13c56c30942fed1b2eab52fa6f605277304a1,PodSandboxId:2e920db1f861f5161dd3eaf69ba95be9ed1eaa121acd8414ed1b9d2347affe6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709907761921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdl2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f11bf4f-f370-4957-95a1-773d255d227b,},Annotations:map[string]string{io.kubernetes.container.hash: dcf79dd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d7cefd108b725836b24bb8c42f8a398b76c5c5d495b7ea5d653ac67a685582,PodSandboxId:a361913ed2fb8ee1bcad23cf095cbb3983f37812a3222b5ca86f6d5848f3c615,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714075709104778853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22w7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dda9cd-3cf5-4fdd-b4b6-f88e0360f513,},Annotations:map[string]string{io.kubernetes.container.hash: a4be3b58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280170b99deab6385c954bfdfe114fe4be6b971d8ec047921e7c36e2c62323b,PodSandboxId:33348938e34cfc8db6dd875de4fdec025925b7a31657ce148254ce89ebae9eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171407570906
3034608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1960de28-d946-4cfb-99fd-dd89fd7f6e67,},Annotations:map[string]string{io.kubernetes.container.hash: ccd0a75c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d760a6bfe9ed83526ada5d764e3d80fab995270530df7bb4df4733e3fe72bdfb,PodSandboxId:8dda842c6e476549593da9aaf1c47aa24817c668b78f816e5c20239ecab56b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075688424967058,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a480a53c7855225626492dfd8c653ea3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eb87cf504ed4707126c6b0f5e37b36ebfc7801efd594d7450f75ae6d82c303,PodSandboxId:742c3330a0a89d0dd9ef08ebac6b4b284024139c3dce81ec2bf9994ab0402882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075688392667005,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 747e3598f2fa1ffc2618ff97b0571488,},Annotations:map[string]string{io.kubernetes.container.hash: 829b1439,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d02c3f6172773ca8b2e3501f4c826c0d7e90c1d1b9df69650fc9b06fbfc1e09,PodSandboxId:85d4601c24196f706b84b44e7e24a48f53e20aa45629b1291a23ecd091b7a940,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075688316745642,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b282287dd65b57af6e5aa6ec38640dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7405b686b29b061435baf005e384e6b2c3cfdb12bf75325a1723414682df0f,PodSandboxId:7cdef08fe249bf9417509aef7b10bbc9536e2fe03517e1684c4d6f66c3191ef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075688323330202,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80048aa3ed845c1d63441fe380468533,},Annotations:map[string]string{io.kubernetes.container.hash: a6e99913,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6bb72396-0888-4fb2-98e9-4465d7350b01 name=/runtime.v1.RuntimeService/ListContainers
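	
	The CRI-O entries above are CRI gRPC round trips (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) answered by cri-o 1.29.1. A hedged sketch of pulling the same data by hand with crictl, reusing the socket path quoted in the kubeadm output; nothing below is output from this run.
	
		# Runtime name and version, mirroring RuntimeService/Version.
		crictl --runtime-endpoint /var/run/crio/crio.sock version
		# Image filesystem usage, mirroring ImageService/ImageFsInfo.
		crictl --runtime-endpoint /var/run/crio/crio.sock imagefsinfo
		# Unfiltered container list, mirroring RuntimeService/ListContainers.
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a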
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f60cda47620ae       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   39b55182ea9b9       coredns-7db6d8ff4d-2mxxt
	35dd66e9dd75e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   2e920db1f861f       coredns-7db6d8ff4d-xdl2d
	39d7cefd108b7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   a361913ed2fb8       kube-proxy-22w7x
	9280170b99dea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   33348938e34cf       storage-provisioner
	d760a6bfe9ed8       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   8dda842c6e476       kube-scheduler-no-preload-744552
	a5eb87cf504ed       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   742c3330a0a89       kube-apiserver-no-preload-744552
	cd7405b686b29       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   7cdef08fe249b       etcd-no-preload-744552
	0d02c3f617277       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   85d4601c24196       kube-controller-manager-no-preload-744552
	
	
	==> coredns [35dd66e9dd75e198b6914d960aa13c56c30942fed1b2eab52fa6f605277304a1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f60cda47620ae0c7640d7f7b8531567b0f12ca7f4be1b5ae77939138e3bfce0c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-744552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-744552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=no-preload-744552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T20_08_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 20:08:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-744552
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 20:17:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 20:13:41 +0000   Thu, 25 Apr 2024 20:08:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 20:13:41 +0000   Thu, 25 Apr 2024 20:08:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 20:13:41 +0000   Thu, 25 Apr 2024 20:08:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 20:13:41 +0000   Thu, 25 Apr 2024 20:08:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.142
	  Hostname:    no-preload-744552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 38800b8f3279411fb3268a56d385002c
	  System UUID:                38800b8f-3279-411f-b326-8a56d385002c
	  Boot ID:                    30963a51-cffd-4030-bc24-715b76ee9a9f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2mxxt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-7db6d8ff4d-xdl2d                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-744552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-no-preload-744552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-no-preload-744552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-22w7x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-744552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-zpj9f              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m5s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node no-preload-744552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node no-preload-744552 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node no-preload-744552 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m9s   node-controller  Node no-preload-744552 event: Registered Node no-preload-744552 in Controller
	
	
	==> dmesg <==
	[  +0.052182] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043472] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.630967] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.472947] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.710164] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.324199] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.055598] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066901] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.208122] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.147239] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.315615] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[Apr25 20:03] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.065708] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.313424] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +5.670561] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.582042] kauditd_printk_skb: 79 callbacks suppressed
	[Apr25 20:08] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.628851] systemd-fstab-generator[4014]: Ignoring "noauto" option for root device
	[  +4.470713] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.430215] systemd-fstab-generator[4338]: Ignoring "noauto" option for root device
	[ +13.484501] systemd-fstab-generator[4551]: Ignoring "noauto" option for root device
	[  +0.118508] kauditd_printk_skb: 14 callbacks suppressed
	[Apr25 20:09] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [cd7405b686b29b061435baf005e384e6b2c3cfdb12bf75325a1723414682df0f] <==
	{"level":"info","ts":"2024-04-25T20:08:08.751468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9be3859290e499ce switched to configuration voters=(11232968760134769102)"}
	{"level":"info","ts":"2024-04-25T20:08:08.751807Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7a995cf908c9189","local-member-id":"9be3859290e499ce","added-peer-id":"9be3859290e499ce","added-peer-peer-urls":["https://192.168.72.142:2380"]}
	{"level":"info","ts":"2024-04-25T20:08:08.822274Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-25T20:08:08.822589Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9be3859290e499ce","initial-advertise-peer-urls":["https://192.168.72.142:2380"],"listen-peer-urls":["https://192.168.72.142:2380"],"advertise-client-urls":["https://192.168.72.142:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.142:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-25T20:08:08.822667Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-25T20:08:08.822809Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.142:2380"}
	{"level":"info","ts":"2024-04-25T20:08:08.822853Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.142:2380"}
	{"level":"info","ts":"2024-04-25T20:08:09.100463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9be3859290e499ce is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-25T20:08:09.100539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9be3859290e499ce became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-25T20:08:09.100571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9be3859290e499ce received MsgPreVoteResp from 9be3859290e499ce at term 1"}
	{"level":"info","ts":"2024-04-25T20:08:09.100585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9be3859290e499ce became candidate at term 2"}
	{"level":"info","ts":"2024-04-25T20:08:09.100592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9be3859290e499ce received MsgVoteResp from 9be3859290e499ce at term 2"}
	{"level":"info","ts":"2024-04-25T20:08:09.1006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9be3859290e499ce became leader at term 2"}
	{"level":"info","ts":"2024-04-25T20:08:09.100608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9be3859290e499ce elected leader 9be3859290e499ce at term 2"}
	{"level":"info","ts":"2024-04-25T20:08:09.10465Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T20:08:09.108744Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9be3859290e499ce","local-member-attributes":"{Name:no-preload-744552 ClientURLs:[https://192.168.72.142:2379]}","request-path":"/0/members/9be3859290e499ce/attributes","cluster-id":"7a995cf908c9189","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T20:08:09.108925Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T20:08:09.109288Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T20:08:09.109509Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T20:08:09.109555Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T20:08:09.11728Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.142:2379"}
	{"level":"info","ts":"2024-04-25T20:08:09.122068Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-25T20:08:09.15349Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7a995cf908c9189","local-member-id":"9be3859290e499ce","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T20:08:09.153598Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T20:08:09.153625Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 20:17:35 up 15 min,  0 users,  load average: 0.44, 0.27, 0.19
	Linux no-preload-744552 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a5eb87cf504ed4707126c6b0f5e37b36ebfc7801efd594d7450f75ae6d82c303] <==
	I0425 20:11:29.602239       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:13:11.323150       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:13:11.323462       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0425 20:13:12.324624       1 handler_proxy.go:93] no RequestInfo found in the context
	W0425 20:13:12.324671       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:13:12.324802       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:13:12.324812       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0425 20:13:12.324891       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:13:12.326183       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:14:12.325642       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:14:12.326081       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:14:12.326153       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:14:12.326317       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:14:12.326683       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:14:12.327686       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:16:12.326573       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:16:12.326730       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:16:12.326746       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:16:12.328673       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:16:12.328887       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:16:12.328934       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0d02c3f6172773ca8b2e3501f4c826c0d7e90c1d1b9df69650fc9b06fbfc1e09] <==
	I0425 20:11:57.258063       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:12:26.715845       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:12:27.266818       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:12:56.721753       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:12:57.276281       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:13:26.727413       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:13:27.285044       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:13:56.733975       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:13:57.292834       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:14:26.740920       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:14:27.303251       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0425 20:14:32.483873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="236.042µs"
	I0425 20:14:47.479314       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="112.07µs"
	E0425 20:14:56.746789       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:14:57.314091       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:15:26.752947       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:15:27.322166       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:15:56.758604       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:15:57.331305       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:16:26.766754       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:16:27.342931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:16:56.773483       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:16:57.354633       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:17:26.778870       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:17:27.364163       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [39d7cefd108b725836b24bb8c42f8a398b76c5c5d495b7ea5d653ac67a685582] <==
	I0425 20:08:29.390429       1 server_linux.go:69] "Using iptables proxy"
	I0425 20:08:29.406480       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.142"]
	I0425 20:08:29.470274       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 20:08:29.470312       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 20:08:29.470328       1 server_linux.go:165] "Using iptables Proxier"
	I0425 20:08:29.475766       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 20:08:29.476127       1 server.go:872] "Version info" version="v1.30.0"
	I0425 20:08:29.476183       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 20:08:29.479022       1 config.go:192] "Starting service config controller"
	I0425 20:08:29.479079       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 20:08:29.479124       1 config.go:101] "Starting endpoint slice config controller"
	I0425 20:08:29.479140       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 20:08:29.479873       1 config.go:319] "Starting node config controller"
	I0425 20:08:29.481849       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 20:08:29.579899       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 20:08:29.579929       1 shared_informer.go:320] Caches are synced for service config
	I0425 20:08:29.586330       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d760a6bfe9ed83526ada5d764e3d80fab995270530df7bb4df4733e3fe72bdfb] <==
	E0425 20:08:11.345228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 20:08:11.344193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 20:08:11.345279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 20:08:11.344240       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 20:08:11.345326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 20:08:11.345491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 20:08:11.345706       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 20:08:11.345756       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 20:08:12.193677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 20:08:12.193727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 20:08:12.214268       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 20:08:12.214413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 20:08:12.255532       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 20:08:12.255666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0425 20:08:12.258952       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 20:08:12.259039       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 20:08:12.437924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 20:08:12.438053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 20:08:12.438140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 20:08:12.438863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 20:08:12.604314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 20:08:12.604538       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 20:08:12.670764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 20:08:12.670820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0425 20:08:15.437873       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 25 20:15:14 no-preload-744552 kubelet[4345]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:15:14 no-preload-744552 kubelet[4345]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:15:14 no-preload-744552 kubelet[4345]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:15:16 no-preload-744552 kubelet[4345]: E0425 20:15:16.461247    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:15:29 no-preload-744552 kubelet[4345]: E0425 20:15:29.461592    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:15:41 no-preload-744552 kubelet[4345]: E0425 20:15:41.461744    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:15:54 no-preload-744552 kubelet[4345]: E0425 20:15:54.463091    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:16:05 no-preload-744552 kubelet[4345]: E0425 20:16:05.461516    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:16:14 no-preload-744552 kubelet[4345]: E0425 20:16:14.504482    4345 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:16:14 no-preload-744552 kubelet[4345]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:16:14 no-preload-744552 kubelet[4345]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:16:14 no-preload-744552 kubelet[4345]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:16:14 no-preload-744552 kubelet[4345]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:16:16 no-preload-744552 kubelet[4345]: E0425 20:16:16.461075    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:16:31 no-preload-744552 kubelet[4345]: E0425 20:16:31.461840    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:16:43 no-preload-744552 kubelet[4345]: E0425 20:16:43.461000    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:16:56 no-preload-744552 kubelet[4345]: E0425 20:16:56.463157    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:17:08 no-preload-744552 kubelet[4345]: E0425 20:17:08.461717    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:17:14 no-preload-744552 kubelet[4345]: E0425 20:17:14.502692    4345 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:17:14 no-preload-744552 kubelet[4345]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:17:14 no-preload-744552 kubelet[4345]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:17:14 no-preload-744552 kubelet[4345]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:17:14 no-preload-744552 kubelet[4345]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:17:21 no-preload-744552 kubelet[4345]: E0425 20:17:21.462550    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:17:34 no-preload-744552 kubelet[4345]: E0425 20:17:34.465501    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	
	
	==> storage-provisioner [9280170b99deab6385c954bfdfe114fe4be6b971d8ec047921e7c36e2c62323b] <==
	I0425 20:08:29.224574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0425 20:08:29.261335       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0425 20:08:29.261521       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0425 20:08:29.278839       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0425 20:08:29.279099       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-744552_1f08e049-89b8-4094-bce1-23bc472ee6e9!
	I0425 20:08:29.279896       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eefb9920-1470-4da9-b4fc-8c0df48631f6", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-744552_1f08e049-89b8-4094-bce1-23bc472ee6e9 became leader
	I0425 20:08:29.380482       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-744552_1f08e049-89b8-4094-bce1-23bc472ee6e9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-744552 -n no-preload-744552
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-744552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-zpj9f
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-744552 describe pod metrics-server-569cc877fc-zpj9f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-744552 describe pod metrics-server-569cc877fc-zpj9f: exit status 1 (69.068363ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-zpj9f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-744552 describe pod metrics-server-569cc877fc-zpj9f: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.72s)
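The post-mortem above first lists pods that are not in phase Running and then describes them; the describe command was issued without a namespace, so it looked in "default" while the metrics-server pod lives in kube-system (see the kubelet log entries above), which likely explains the NotFound error. A rough manual equivalent of that check, using only standard kubectl flags and the pod name taken from the report, would be:

	kubectl --context no-preload-744552 get pods -A --field-selector=status.phase!=Running
	kubectl --context no-preload-744552 -n kube-system describe pod metrics-server-569cc877fc-zpj9f

These commands are illustrative and are not part of the recorded test output.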

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0425 20:08:36.328044   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 20:08:48.491393   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 20:09:55.065119   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 20:10:12.603207   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 20:10:45.439034   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 20:11:11.710247   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 20:11:18.110083   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 20:11:35.645474   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-512173 -n embed-certs-512173
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-25 20:17:35.883890987 +0000 UTC m=+6382.292825550
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
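The wait above polls for any pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace for up to 9 minutes. An approximate stand-alone equivalent, expressed with kubectl rather than the test's client-go helper (illustrative only, not taken from the harness output), is:

	kubectl --context embed-certs-512173 -n kubernetes-dashboard wait pod --selector=k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s

Note that kubectl wait errors out immediately if no matching pod exists yet, so reproducing the test's behaviour exactly would require a retry loop around this command.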
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512173 -n embed-certs-512173
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-512173 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-512173 logs -n 25: (2.460615281s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-120641 sudo cat                             | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo find                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo crio                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-120641                                      | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113000 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:54 UTC |
	|         | disable-driver-mounts-113000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-512173            | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-744552             | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142196  | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210442        | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-512173                 | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-744552                  | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142196       | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:07 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210442             | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:59:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:59:17.353932   72712 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:59:17.354045   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354055   72712 out.go:304] Setting ErrFile to fd 2...
	I0425 19:59:17.354059   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354269   72712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:59:17.354795   72712 out.go:298] Setting JSON to false
	I0425 19:59:17.355681   72712 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6103,"bootTime":1714069054,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:59:17.355740   72712 start.go:139] virtualization: kvm guest
	I0425 19:59:17.357921   72712 out.go:177] * [old-k8s-version-210442] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:59:17.359325   72712 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:59:17.360640   72712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:59:17.359305   72712 notify.go:220] Checking for updates...
	I0425 19:59:17.361801   72712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:59:17.363086   72712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:59:17.364512   72712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:59:17.365842   72712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:59:17.367508   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 19:59:17.367909   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.367946   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.382995   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0425 19:59:17.383362   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.383991   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.384016   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.384378   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.384566   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.386317   72712 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0425 19:59:17.387599   72712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:59:17.387904   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.387948   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.402999   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0425 19:59:17.403506   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.403962   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.403986   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.404318   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.404472   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.438308   72712 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:59:17.439686   72712 start.go:297] selected driver: kvm2
	I0425 19:59:17.439716   72712 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.439831   72712 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:59:17.440486   72712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.440553   72712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:59:17.454719   72712 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:59:17.455114   72712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:59:17.455184   72712 cni.go:84] Creating CNI manager for ""
	I0425 19:59:17.455203   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:59:17.455266   72712 start.go:340] cluster config:
	{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.455393   72712 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.457210   72712 out.go:177] * Starting "old-k8s-version-210442" primary control-plane node in "old-k8s-version-210442" cluster
	I0425 19:59:18.474583   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:17.458384   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 19:59:17.458418   72712 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:59:17.458430   72712 cache.go:56] Caching tarball of preloaded images
	I0425 19:59:17.458517   72712 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:59:17.458529   72712 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0425 19:59:17.458638   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 19:59:17.458844   72712 start.go:360] acquireMachinesLock for old-k8s-version-210442: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
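
Context for the preload lines above (preload.go:132/147, cache.go:56/59): before downloading anything, minikube checks whether a preloaded image tarball for the requested Kubernetes version and container runtime is already in its local cache. A minimal sketch of that lookup, assuming the cache layout and file-name scheme visible in the log; preloadTarball and hasLocalPreload are hypothetical helpers written for this illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // preloadTarball builds the cache path for a preload tarball, mirroring the
    // file name seen in the log. The base directory and the "v18" preload schema
    // version are assumptions for this sketch.
    func preloadTarball(cacheDir, k8sVersion, runtime string) string {
    	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
    	return filepath.Join(cacheDir, "preloaded-tarball", name)
    }

    // hasLocalPreload reports whether the tarball is already cached, so the
    // download step can be skipped.
    func hasLocalPreload(cacheDir, k8sVersion, runtime string) (string, bool) {
    	p := preloadTarball(cacheDir, k8sVersion, runtime)
    	if _, err := os.Stat(p); err == nil {
    		return p, true
    	}
    	return p, false
    }

    func main() {
    	if p, ok := hasLocalPreload(os.ExpandEnv("$HOME/.minikube/cache"), "v1.20.0", "cri-o"); ok {
    		fmt.Println("found local preload, skipping download:", p)
    	} else {
    		fmt.Println("no local preload at", p, "- would download")
    	}
    }
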
	I0425 19:59:24.554517   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:27.626446   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:33.706451   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:36.778527   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:42.858471   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:45.930403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:52.010482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:55.082403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:01.162466   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:04.234537   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:10.314506   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:13.386463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:19.466523   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:22.538461   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:28.622423   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:31.690489   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:37.770534   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:40.842458   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:46.922463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:49.994524   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:56.074478   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:59.146487   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:05.226452   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:08.298480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:14.378455   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:17.450469   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:23.530513   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:26.602470   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:32.682497   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:35.754500   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:41.834480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:44.906482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:50.986468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:54.058502   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:00.138459   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:03.210554   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:09.290491   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:12.362472   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:18.442476   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:21.514468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.599158   72220 start.go:364] duration metric: took 4m21.632012686s to acquireMachinesLock for "no-preload-744552"
	I0425 20:02:30.599206   72220 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:30.599212   72220 fix.go:54] fixHost starting: 
	I0425 20:02:30.599516   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:30.599545   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:30.614130   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0425 20:02:30.614502   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:30.614962   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:02:30.614979   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:30.615306   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:30.615513   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:30.615640   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:02:30.617129   72220 fix.go:112] recreateIfNeeded on no-preload-744552: state=Stopped err=<nil>
	I0425 20:02:30.617150   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	W0425 20:02:30.617300   72220 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:30.619253   72220 out.go:177] * Restarting existing kvm2 VM for "no-preload-744552" ...
	I0425 20:02:27.594454   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.596600   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:02:30.596654   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.596986   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:02:30.597016   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.597206   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:02:30.599042   71966 machine.go:97] duration metric: took 4m44.620242563s to provisionDockerMachine
	I0425 20:02:30.599079   71966 fix.go:56] duration metric: took 4m44.639860566s for fixHost
	I0425 20:02:30.599085   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 4m44.639890108s
	W0425 20:02:30.599104   71966 start.go:713] error starting host: provision: host is not running
	W0425 20:02:30.599182   71966 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0425 20:02:30.599192   71966 start.go:728] Will try again in 5 seconds ...
	I0425 20:02:30.620801   72220 main.go:141] libmachine: (no-preload-744552) Calling .Start
	I0425 20:02:30.620978   72220 main.go:141] libmachine: (no-preload-744552) Ensuring networks are active...
	I0425 20:02:30.621640   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network default is active
	I0425 20:02:30.621965   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network mk-no-preload-744552 is active
	I0425 20:02:30.622317   72220 main.go:141] libmachine: (no-preload-744552) Getting domain xml...
	I0425 20:02:30.623010   72220 main.go:141] libmachine: (no-preload-744552) Creating domain...
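
The restart sequence above ("Ensuring networks are active", "Getting domain xml", "Creating domain") goes through the kvm2 driver's libvirt plugin. A rough shell-level equivalent, sketched here with os/exec and plain virsh; this is only an illustration of the same steps, not the driver's actual code path:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and returns an error that includes whatever the
    // command printed, so failures are easy to diagnose.
    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
    	}
    	return nil
    }

    func main() {
    	domain := "no-preload-744552"     // VM name from the log
    	network := "mk-no-preload-744552" // per-profile libvirt network

    	// Make sure both libvirt networks the VM attaches to are active.
    	// net-start fails if a network is already active; that is tolerated here.
    	for _, n := range []string{"default", network} {
    		_ = run("virsh", "net-start", n)
    	}
    	// Boot the existing domain; roughly what the .Start call in the log does.
    	if err := run("virsh", "start", domain); err != nil {
    		fmt.Println("start failed:", err)
    		return
    	}
    	fmt.Println("domain", domain, "started; now waiting for DHCP to hand out an IP")
    }
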
	I0425 20:02:31.809967   72220 main.go:141] libmachine: (no-preload-744552) Waiting to get IP...
	I0425 20:02:31.810856   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:31.811353   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:31.811403   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:31.811308   73381 retry.go:31] will retry after 294.641704ms: waiting for machine to come up
	I0425 20:02:32.107955   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.108508   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.108542   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.108449   73381 retry.go:31] will retry after 373.307428ms: waiting for machine to come up
	I0425 20:02:32.483111   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.483590   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.483619   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.483546   73381 retry.go:31] will retry after 484.455862ms: waiting for machine to come up
	I0425 20:02:32.969188   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.969657   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.969694   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.969602   73381 retry.go:31] will retry after 382.359725ms: waiting for machine to come up
	I0425 20:02:33.353143   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.353598   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.353621   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.353550   73381 retry.go:31] will retry after 515.389674ms: waiting for machine to come up
	I0425 20:02:35.602273   71966 start.go:360] acquireMachinesLock for embed-certs-512173: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 20:02:33.870172   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.870652   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.870676   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.870603   73381 retry.go:31] will retry after 714.032032ms: waiting for machine to come up
	I0425 20:02:34.586478   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:34.586833   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:34.586861   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:34.586791   73381 retry.go:31] will retry after 1.005122465s: waiting for machine to come up
	I0425 20:02:35.593962   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:35.594367   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:35.594400   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:35.594310   73381 retry.go:31] will retry after 1.483740326s: waiting for machine to come up
	I0425 20:02:37.079306   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:37.079751   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:37.079784   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:37.079700   73381 retry.go:31] will retry after 1.828802911s: waiting for machine to come up
	I0425 20:02:38.910631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:38.911138   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:38.911163   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:38.911086   73381 retry.go:31] will retry after 1.528405609s: waiting for machine to come up
	I0425 20:02:40.441741   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:40.442251   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:40.442277   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:40.442200   73381 retry.go:31] will retry after 2.817901976s: waiting for machine to come up
	I0425 20:02:43.263903   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:43.264376   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:43.264408   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:43.264324   73381 retry.go:31] will retry after 2.258888981s: waiting for machine to come up
	I0425 20:02:45.525701   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:45.526139   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:45.526168   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:45.526106   73381 retry.go:31] will retry after 4.008258204s: waiting for machine to come up
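
The retry.go:31 lines above show the driver polling for the VM's DHCP lease with growing, jittered delays. A minimal sketch of that pattern, assuming a caller-supplied lookupIP function (the real driver reads the lease from libvirt); the starting delay, growth factor, and cap are illustrative, not the driver's exact values:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookupIP with roughly exponential, jittered backoff until
    // it returns an address or the deadline passes.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 300 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil && ip != "" {
    			return ip, nil
    		}
    		// Jitter the delay so concurrent waiters do not poll in lockstep,
    		// then grow it, capped at a few seconds.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 4*time.Second {
    			delay = delay * 3 / 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no lease yet") // simulate a VM still booting
    		}
    		return "192.168.72.142", nil
    	}, 2*time.Minute)
    	fmt.Println(ip, err)
    }
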
	I0425 20:02:50.951421   72304 start.go:364] duration metric: took 4m34.5614094s to acquireMachinesLock for "default-k8s-diff-port-142196"
	I0425 20:02:50.951491   72304 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:50.951500   72304 fix.go:54] fixHost starting: 
	I0425 20:02:50.951906   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:50.951944   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:50.968074   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33481
	I0425 20:02:50.968452   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:50.968862   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:02:50.968886   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:50.969238   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:50.969460   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:02:50.969622   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:02:50.971100   72304 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142196: state=Stopped err=<nil>
	I0425 20:02:50.971125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	W0425 20:02:50.971271   72304 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:50.974623   72304 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142196" ...
	I0425 20:02:50.975991   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Start
	I0425 20:02:50.976154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring networks are active...
	I0425 20:02:50.976794   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network default is active
	I0425 20:02:50.977111   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network mk-default-k8s-diff-port-142196 is active
	I0425 20:02:50.977490   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Getting domain xml...
	I0425 20:02:50.978200   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Creating domain...
	I0425 20:02:49.538522   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.538999   72220 main.go:141] libmachine: (no-preload-744552) Found IP for machine: 192.168.72.142
	I0425 20:02:49.539033   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has current primary IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.539043   72220 main.go:141] libmachine: (no-preload-744552) Reserving static IP address...
	I0425 20:02:49.539420   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.539458   72220 main.go:141] libmachine: (no-preload-744552) DBG | skip adding static IP to network mk-no-preload-744552 - found existing host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"}
	I0425 20:02:49.539469   72220 main.go:141] libmachine: (no-preload-744552) Reserved static IP address: 192.168.72.142
	I0425 20:02:49.539483   72220 main.go:141] libmachine: (no-preload-744552) Waiting for SSH to be available...
	I0425 20:02:49.539490   72220 main.go:141] libmachine: (no-preload-744552) DBG | Getting to WaitForSSH function...
	I0425 20:02:49.541631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542042   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.542073   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542221   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH client type: external
	I0425 20:02:49.542270   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa (-rw-------)
	I0425 20:02:49.542300   72220 main.go:141] libmachine: (no-preload-744552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:02:49.542316   72220 main.go:141] libmachine: (no-preload-744552) DBG | About to run SSH command:
	I0425 20:02:49.542334   72220 main.go:141] libmachine: (no-preload-744552) DBG | exit 0
	I0425 20:02:49.670034   72220 main.go:141] libmachine: (no-preload-744552) DBG | SSH cmd err, output: <nil>: 
	I0425 20:02:49.670414   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetConfigRaw
	I0425 20:02:49.671039   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:49.673279   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673592   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.673629   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673878   72220 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/config.json ...
	I0425 20:02:49.674066   72220 machine.go:94] provisionDockerMachine start ...
	I0425 20:02:49.674083   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:49.674317   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.676767   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677084   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.677115   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677238   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.677413   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677562   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677698   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.677841   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.678037   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.678049   72220 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:02:49.790734   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:02:49.790764   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791028   72220 buildroot.go:166] provisioning hostname "no-preload-744552"
	I0425 20:02:49.791061   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791248   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.793907   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794279   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.794313   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794450   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.794649   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794787   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794908   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.795054   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.795256   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.795277   72220 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-744552 && echo "no-preload-744552" | sudo tee /etc/hostname
	I0425 20:02:49.925459   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744552
	
	I0425 20:02:49.925483   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.928282   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928646   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.928680   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928831   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.929012   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929194   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929327   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.929481   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.929679   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.929709   72220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-744552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-744552/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-744552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:02:50.052805   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
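
The SSH command above keeps /etc/hosts on the guest consistent with the new hostname: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 entry or appends one. The same idempotent logic, sketched in Go against a local file; the 127.0.1.1 convention and hostname follow the shell snippet in the log, and this is an illustration rather than minikube's provisioning code:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry returns the hosts-file content with an entry for hostname:
    // an existing entry is left alone, an existing 127.0.1.1 line is rewritten,
    // and otherwise a new line is appended.
    func ensureHostsEntry(content, hostname string) string {
    	hasEntry := regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`)
    	if hasEntry.MatchString(content) {
    		return content // already present, nothing to do
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(content) {
    		return loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
    	}
    	return strings.TrimRight(content, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
    	b, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	fmt.Print(ensureHostsEntry(string(b), "no-preload-744552"))
    }
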
	I0425 20:02:50.052841   72220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:02:50.052861   72220 buildroot.go:174] setting up certificates
	I0425 20:02:50.052875   72220 provision.go:84] configureAuth start
	I0425 20:02:50.052887   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:50.053193   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.055800   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056145   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.056168   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056339   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.058090   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058395   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.058429   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058526   72220 provision.go:143] copyHostCerts
	I0425 20:02:50.058577   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:02:50.058587   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:02:50.058647   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:02:50.058742   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:02:50.058750   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:02:50.058774   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:02:50.058827   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:02:50.058834   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:02:50.058855   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:02:50.058904   72220 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.no-preload-744552 san=[127.0.0.1 192.168.72.142 localhost minikube no-preload-744552]
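
provision.go:117 above generates a per-machine server certificate whose SANs mix IP addresses and DNS names (127.0.0.1, the VM IP, localhost, minikube, the machine name). When building an x509 template those have to be split into IPAddresses and DNSNames; a minimal sketch of that split using the SAN list from the log (serial number, validity window, key usage, and the CA signing step are omitted here):

    package main

    import (
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"net"
    )

    // templateWithSANs splits a mixed SAN list into IPAddresses and DNSNames,
    // which is how an x509 certificate template expects them.
    func templateWithSANs(cn string, sans []string) *x509.Certificate {
    	tmpl := &x509.Certificate{Subject: pkix.Name{CommonName: cn}}
    	for _, san := range sans {
    		if ip := net.ParseIP(san); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, san)
    		}
    	}
    	return tmpl
    }

    func main() {
    	t := templateWithSANs("no-preload-744552",
    		[]string{"127.0.0.1", "192.168.72.142", "localhost", "minikube", "no-preload-744552"})
    	fmt.Println("IP SANs:", t.IPAddresses)
    	fmt.Println("DNS SANs:", t.DNSNames)
    }
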
	I0425 20:02:50.247711   72220 provision.go:177] copyRemoteCerts
	I0425 20:02:50.247768   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:02:50.247792   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.250146   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250560   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.250600   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250780   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.250978   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.251128   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.251272   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.338105   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:02:50.365554   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 20:02:50.391433   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:02:50.416606   72220 provision.go:87] duration metric: took 363.720332ms to configureAuth
	I0425 20:02:50.416627   72220 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:02:50.416795   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:02:50.416876   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.419385   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419731   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.419764   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419903   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.420079   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420322   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420557   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.420724   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.420909   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.420929   72220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:02:50.702065   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:02:50.702104   72220 machine.go:97] duration metric: took 1.028026584s to provisionDockerMachine
	I0425 20:02:50.702117   72220 start.go:293] postStartSetup for "no-preload-744552" (driver="kvm2")
	I0425 20:02:50.702131   72220 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:02:50.702165   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.702531   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:02:50.702572   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.705595   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.705948   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.705992   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.706173   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.706367   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.706588   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.706759   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.794791   72220 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:02:50.799592   72220 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:02:50.799621   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:02:50.799701   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:02:50.799799   72220 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:02:50.799913   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:02:50.810796   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:02:50.836919   72220 start.go:296] duration metric: took 134.787005ms for postStartSetup
	I0425 20:02:50.836972   72220 fix.go:56] duration metric: took 20.237758066s for fixHost
	I0425 20:02:50.836995   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.839818   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840295   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.840325   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840429   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.840600   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840752   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840929   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.841079   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.841307   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.841338   72220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:02:50.951251   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075370.921171901
	
	I0425 20:02:50.951272   72220 fix.go:216] guest clock: 1714075370.921171901
	I0425 20:02:50.951279   72220 fix.go:229] Guest: 2024-04-25 20:02:50.921171901 +0000 UTC Remote: 2024-04-25 20:02:50.836976462 +0000 UTC m=+282.018789867 (delta=84.195439ms)
	I0425 20:02:50.951312   72220 fix.go:200] guest clock delta is within tolerance: 84.195439ms
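
fix.go above reads the guest clock over SSH with date +%s.%N, compares it to the host clock, and accepts the drift if the delta stays within a tolerance. A small sketch of that comparison using the two readings from the log; the parsing helper and the 2-second threshold are assumptions for this illustration:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseClock turns "1714075370.921171901" (seconds.nanoseconds, the output
    // of `date +%s.%N`) into a time.Time.
    func parseClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseClock("1714075370.921171901") // guest reading from the log
    	host, _ := parseClock("1714075370.836976462")  // host reading at the same moment
    	delta := host.Sub(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed threshold for this sketch
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
    	}
    }
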
	I0425 20:02:50.951321   72220 start.go:83] releasing machines lock for "no-preload-744552", held for 20.352126868s
	I0425 20:02:50.951348   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.951612   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.954231   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954614   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.954638   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954821   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955240   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955419   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955492   72220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:02:50.955540   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.955659   72220 ssh_runner.go:195] Run: cat /version.json
	I0425 20:02:50.955688   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.958155   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958476   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958517   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958541   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958661   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.958808   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.958903   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958932   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.958935   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.959045   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.959181   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.959192   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.959360   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.959471   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:51.066809   72220 ssh_runner.go:195] Run: systemctl --version
	I0425 20:02:51.073198   72220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:02:51.228547   72220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:02:51.236443   72220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:02:51.236518   72220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:02:51.256226   72220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:02:51.256244   72220 start.go:494] detecting cgroup driver to use...
	I0425 20:02:51.256307   72220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:02:51.278596   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:02:51.295692   72220 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:02:51.295751   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:02:51.310940   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:02:51.326072   72220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:02:51.459064   72220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:02:51.614563   72220 docker.go:233] disabling docker service ...
	I0425 20:02:51.614639   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:02:51.638817   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:02:51.658265   72220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:02:51.818412   72220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:02:51.943830   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:02:51.960672   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:02:51.982028   72220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:02:51.982090   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:51.994990   72220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:02:51.995079   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.007907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.020225   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.033306   72220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:02:52.046241   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.058282   72220 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.078907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
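
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, forces the cgroupfs cgroup manager, puts conmon in the "pod" cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A sketch of roughly what the resulting drop-in contains, expressed here as a Go constant for illustration; the section names follow CRI-O's usual layout and are an assumption about the file minikube ships, and writing the file wholesale is not what minikube does (it edits the existing file so other settings survive):

    package main

    import (
    	"fmt"
    	"os"
    )

    // crioDropIn approximates the end state of 02-crio.conf after the sed edits
    // in the log.
    const crioDropIn = `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `

    func main() {
    	// Written to /tmp purely so the sketch is runnable without touching CRI-O.
    	if err := os.WriteFile("/tmp/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("wrote /tmp/02-crio.conf")
    }
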
	I0425 20:02:52.090258   72220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:02:52.100796   72220 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:02:52.100873   72220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:02:52.115600   72220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
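
The three commands above implement a fallback: verify bridge netfilter via sysctl, load the br_netfilter module when the sysctl key does not exist yet, then enable IPv4 forwarding. The same sequence sketched with os/exec (requires root; the failed sysctl is tolerated, mirroring the crio.go:166 message in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The sysctl key only appears once br_netfilter is loaded, so a failure
    	// here is expected on a freshly booted guest.
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("bridge netfilter not available yet, loading br_netfilter:", err)
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Println("modprobe br_netfilter failed:", err)
    			return
    		}
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    		return
    	}
    	fmt.Println("netfilter prerequisites in place")
    }
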
	I0425 20:02:52.125458   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:02:52.288142   72220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:02:52.430252   72220 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:02:52.430353   72220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:02:52.436493   72220 start.go:562] Will wait 60s for crictl version
	I0425 20:02:52.436565   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.441427   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:02:52.479709   72220 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:02:52.479810   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.512180   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.545115   72220 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:02:52.546476   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:52.549314   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549723   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:52.549759   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549926   72220 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0425 20:02:52.554924   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:02:52.568804   72220 kubeadm.go:877] updating cluster {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:02:52.568958   72220 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:02:52.568997   72220 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:02:52.609095   72220 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:02:52.609117   72220 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:02:52.609156   72220 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.609188   72220 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.609185   72220 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.609214   72220 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.609227   72220 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.609256   72220 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.609334   72220 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.609370   72220 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610726   72220 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.610747   72220 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610772   72220 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.610724   72220 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.610800   72220 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.610807   72220 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.611075   72220 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.611096   72220 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.753069   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0425 20:02:52.771762   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.825052   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908030   72220 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0425 20:02:52.908082   72220 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.908113   72220 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0425 20:02:52.908127   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.908135   72220 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908164   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.915126   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.915132   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.967834   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.969385   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.973718   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0425 20:02:52.973787   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0425 20:02:52.973823   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:52.973870   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:52.985763   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.986695   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.068153   72220 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0425 20:02:53.068196   72220 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.068269   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099237   72220 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0425 20:02:53.099257   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0425 20:02:53.099274   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099290   72220 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:53.099294   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0425 20:02:53.099330   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099368   72220 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0425 20:02:53.099401   72220 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:53.099433   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099333   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.115478   72220 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0425 20:02:53.115523   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.115526   72220 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.115610   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.550328   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.240552   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting to get IP...
	I0425 20:02:52.241327   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241657   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241757   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.241648   73527 retry.go:31] will retry after 195.006273ms: waiting for machine to come up
	I0425 20:02:52.438154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438702   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438726   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.438657   73527 retry.go:31] will retry after 365.911905ms: waiting for machine to come up
	I0425 20:02:52.806281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806793   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806826   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.806727   73527 retry.go:31] will retry after 448.572137ms: waiting for machine to come up
	I0425 20:02:53.257396   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257935   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257966   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.257889   73527 retry.go:31] will retry after 560.886917ms: waiting for machine to come up
	I0425 20:02:53.820527   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820954   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820979   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.820915   73527 retry.go:31] will retry after 514.294303ms: waiting for machine to come up
	I0425 20:02:54.336706   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337129   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:54.337101   73527 retry.go:31] will retry after 853.040726ms: waiting for machine to come up
	I0425 20:02:55.192349   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192857   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:55.192774   73527 retry.go:31] will retry after 1.17554782s: waiting for machine to come up
	I0425 20:02:56.232794   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.133436829s)
	I0425 20:02:56.232845   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0425 20:02:56.232854   72220 ssh_runner.go:235] Completed: which crictl: (3.133373607s)
	I0425 20:02:56.232875   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232915   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232961   72220 ssh_runner.go:235] Completed: which crictl: (3.133515676s)
	I0425 20:02:56.232919   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:56.233011   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:56.233050   72220 ssh_runner.go:235] Completed: which crictl: (3.11742497s)
	I0425 20:02:56.233089   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:56.233126   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (3.117580594s)
	I0425 20:02:56.233160   72220 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.6828061s)
	I0425 20:02:56.233167   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0425 20:02:56.233207   72220 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0425 20:02:56.233242   72220 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:56.233248   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:56.233284   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:56.323764   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0425 20:02:56.323884   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:02:56.323906   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0425 20:02:56.323989   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:02:58.553707   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.320762887s)
	I0425 20:02:58.553742   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0425 20:02:58.553768   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.320739179s)
	I0425 20:02:58.553784   72220 ssh_runner.go:235] Completed: which crictl: (2.320487912s)
	I0425 20:02:58.553807   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0425 20:02:58.553838   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:58.553864   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.320587538s)
	I0425 20:02:58.553889   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:02:58.553909   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0425 20:02:58.553948   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.229944417s)
	I0425 20:02:58.553959   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553989   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0425 20:02:58.554009   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553910   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.23000183s)
	I0425 20:02:58.554069   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0425 20:02:58.602692   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0425 20:02:58.602694   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0425 20:02:58.602819   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:02:56.369693   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370169   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:56.370115   73527 retry.go:31] will retry after 1.260629487s: waiting for machine to come up
	I0425 20:02:57.632705   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633187   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:57.633150   73527 retry.go:31] will retry after 1.291948113s: waiting for machine to come up
	I0425 20:02:58.926675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927167   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927196   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:58.927111   73527 retry.go:31] will retry after 1.869565597s: waiting for machine to come up
	I0425 20:03:00.799357   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799820   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799850   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:00.799750   73527 retry.go:31] will retry after 2.157801293s: waiting for machine to come up
	I0425 20:03:00.027830   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.473790165s)
	I0425 20:03:00.027869   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0425 20:03:00.027895   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027943   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027842   72220 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.424998268s)
	I0425 20:03:00.027985   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0425 20:03:02.204218   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.176247608s)
	I0425 20:03:02.204254   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0425 20:03:02.204290   72220 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.204335   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.959407   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959789   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959812   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:02.959745   73527 retry.go:31] will retry after 2.617480271s: waiting for machine to come up
	I0425 20:03:05.579300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:05.579775   73527 retry.go:31] will retry after 4.058370199s: waiting for machine to come up
	I0425 20:03:06.132743   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.928385447s)
	I0425 20:03:06.132779   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0425 20:03:06.132805   72220 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:06.132857   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:08.314803   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.181910584s)
	I0425 20:03:08.314842   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0425 20:03:08.314881   72220 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:03:08.314930   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
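Note: with no preload tarball available for v1.30.0 on CRI-O, the images above are loaded one by one from minikube's on-disk cache. Reduced to its shell form, the per-image cycle visible in this run is roughly the following (kube-proxy used as the example; the commands are the ones logged above, not an addition to the log):

sudo crictl images --output json                                                  # inventory what the runtime already holds
sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.30.0  # compare against the expected hash
sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0                      # clear any stale copy before loading
sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0                  # load the cached image archive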
	I0425 20:03:11.255486   72712 start.go:364] duration metric: took 3m53.796595105s to acquireMachinesLock for "old-k8s-version-210442"
	I0425 20:03:11.255550   72712 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:11.255569   72712 fix.go:54] fixHost starting: 
	I0425 20:03:11.256083   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:11.256128   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:11.272950   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0425 20:03:11.273365   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:11.273878   72712 main.go:141] libmachine: Using API Version  1
	I0425 20:03:11.273907   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:11.274277   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:11.274487   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:11.274666   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetState
	I0425 20:03:11.276420   72712 fix.go:112] recreateIfNeeded on old-k8s-version-210442: state=Stopped err=<nil>
	I0425 20:03:11.276454   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	W0425 20:03:11.276608   72712 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:11.279156   72712 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210442" ...
	I0425 20:03:09.639300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639833   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Found IP for machine: 192.168.39.123
	I0425 20:03:09.639867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has current primary IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserving static IP address...
	I0425 20:03:09.640257   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.640281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | skip adding static IP to network mk-default-k8s-diff-port-142196 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"}
	I0425 20:03:09.640300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserved static IP address: 192.168.39.123
	I0425 20:03:09.640313   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for SSH to be available...
	I0425 20:03:09.640321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Getting to WaitForSSH function...
	I0425 20:03:09.643058   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643371   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.643400   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643506   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH client type: external
	I0425 20:03:09.643557   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa (-rw-------)
	I0425 20:03:09.643586   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:09.643609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | About to run SSH command:
	I0425 20:03:09.643618   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | exit 0
	I0425 20:03:09.766707   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | SSH cmd err, output: <nil>: 
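Note: "Using SSH client type: external" means the wait-for-SSH probe shells out to the system ssh binary with the options logged above rather than using the built-in Go client. Trimmed to its essentials (key path and address taken from this run), the probe is equivalent to:

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
    -o ConnectTimeout=10 -o IdentitiesOnly=yes -p 22 \
    -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa \
    docker@192.168.39.123 'exit 0'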
	I0425 20:03:09.767091   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetConfigRaw
	I0425 20:03:09.767818   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:09.770573   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771012   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.771047   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771296   72304 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/config.json ...
	I0425 20:03:09.771580   72304 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:09.771609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:09.771884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.774255   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.774699   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774866   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.775044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775213   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775362   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.775520   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.775781   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.775797   72304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:09.884259   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:09.884288   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884519   72304 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142196"
	I0425 20:03:09.884547   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884747   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.887391   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.887798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.887829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.888003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.888215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888542   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.888703   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.888918   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.888934   72304 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142196 && echo "default-k8s-diff-port-142196" | sudo tee /etc/hostname
	I0425 20:03:10.015919   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142196
	
	I0425 20:03:10.015951   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.018640   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.018955   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.018987   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.019201   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.019398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019729   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.019906   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.020098   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.020120   72304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142196' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142196/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142196' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:10.145789   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
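Note: hostname provisioning is the pair of SSH commands shown above: set the hostname and /etc/hostname, then make sure the name resolves locally via 127.0.1.1. A simplified standalone equivalent (machine name from this run; the logged version uses stricter grep -xq anchoring):

sudo hostname default-k8s-diff-port-142196 && \
  echo "default-k8s-diff-port-142196" | sudo tee /etc/hostname
grep -q 'default-k8s-diff-port-142196' /etc/hosts || \
  echo '127.0.1.1 default-k8s-diff-port-142196' | sudo tee -a /etc/hosts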
	I0425 20:03:10.145822   72304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:10.145873   72304 buildroot.go:174] setting up certificates
	I0425 20:03:10.145886   72304 provision.go:84] configureAuth start
	I0425 20:03:10.145899   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:10.146185   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:10.148943   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149309   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.149342   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149492   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.152000   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152418   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.152445   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152621   72304 provision.go:143] copyHostCerts
	I0425 20:03:10.152681   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:10.152693   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:10.152758   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:10.152890   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:10.152905   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:10.152940   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:10.153033   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:10.153044   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:10.153072   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:10.153145   72304 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142196 san=[127.0.0.1 192.168.39.123 default-k8s-diff-port-142196 localhost minikube]
	I0425 20:03:10.572412   72304 provision.go:177] copyRemoteCerts
	I0425 20:03:10.572473   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:10.572496   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.575083   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.575421   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.575696   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.575799   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.575916   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:10.657850   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:10.685493   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0425 20:03:10.713230   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:10.740577   72304 provision.go:87] duration metric: took 594.674196ms to configureAuth
	I0425 20:03:10.740604   72304 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:10.740835   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:10.740916   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.743709   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744039   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.744071   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744236   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.744434   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744621   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744723   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.744901   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.745065   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.745083   72304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:11.017816   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:11.017844   72304 machine.go:97] duration metric: took 1.24624593s to provisionDockerMachine
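Note: the last provisioning step above writes a sysconfig fragment so CRI-O treats the in-cluster service CIDR as an insecure registry, then restarts the runtime; the %!s(MISSING) tokens are format-verb artifacts in the log line itself. Reconstructed as a standalone sketch (CIDR taken from this run):

sudo mkdir -p /etc/sysconfig
printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
  | sudo tee /etc/sysconfig/crio.minikube
sudo systemctl restart crio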
	I0425 20:03:11.017858   72304 start.go:293] postStartSetup for "default-k8s-diff-port-142196" (driver="kvm2")
	I0425 20:03:11.017871   72304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:11.017892   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.018195   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:11.018231   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.020759   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021067   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.021092   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.021403   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.021600   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.021729   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.106290   72304 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:11.111532   72304 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:11.111560   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:11.111645   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:11.111744   72304 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:11.111856   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:11.122216   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:11.150472   72304 start.go:296] duration metric: took 132.600197ms for postStartSetup
	I0425 20:03:11.150520   72304 fix.go:56] duration metric: took 20.199020729s for fixHost
	I0425 20:03:11.150544   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.153466   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.153798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.153824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.154055   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.154289   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154483   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154635   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.154824   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:11.154991   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:11.155001   72304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:03:11.255330   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075391.221756501
	
	I0425 20:03:11.255357   72304 fix.go:216] guest clock: 1714075391.221756501
	I0425 20:03:11.255365   72304 fix.go:229] Guest: 2024-04-25 20:03:11.221756501 +0000 UTC Remote: 2024-04-25 20:03:11.15052524 +0000 UTC m=+294.908822896 (delta=71.231261ms)
	I0425 20:03:11.255384   72304 fix.go:200] guest clock delta is within tolerance: 71.231261ms
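Note: fix.go reads the guest clock over SSH (the date +%s.%N probe above) and compares it to the host's; the 71ms delta on this run is inside the tolerance, so no resync is needed. A rough standalone version of the same comparison, not minikube's implementation (address from this run; threshold handling omitted):

guest=$(ssh docker@192.168.39.123 'date +%s.%N')   # guest wall clock, fractional seconds
host=$(date +%s.%N)                                # host wall clock at (nearly) the same moment
echo "guest-host clock delta: $(echo "$guest - $host" | bc -l)s"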
	I0425 20:03:11.255388   72304 start.go:83] releasing machines lock for "default-k8s-diff-port-142196", held for 20.303917474s
	I0425 20:03:11.255419   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.255700   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:11.258740   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259076   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.259104   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259414   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.259906   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260102   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260197   72304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:11.260241   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.260350   72304 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:11.260374   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.262843   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263001   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263216   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263245   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263365   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263480   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263669   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263679   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263864   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264026   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264039   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.264203   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.280701   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .Start
	I0425 20:03:11.280895   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring networks are active...
	I0425 20:03:11.281729   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network default is active
	I0425 20:03:11.282158   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network mk-old-k8s-version-210442 is active
	I0425 20:03:11.282639   72712 main.go:141] libmachine: (old-k8s-version-210442) Getting domain xml...
	I0425 20:03:11.283399   72712 main.go:141] libmachine: (old-k8s-version-210442) Creating domain...
	I0425 20:03:11.339564   72304 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:11.364667   72304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:11.526308   72304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:11.533487   72304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:11.533563   72304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:11.552090   72304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:11.552120   72304 start.go:494] detecting cgroup driver to use...
	I0425 20:03:11.552196   72304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:11.569573   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:11.584425   72304 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:11.584489   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:11.599083   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:11.613739   72304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:11.739574   72304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:11.911318   72304 docker.go:233] disabling docker service ...
	I0425 20:03:11.911390   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:11.928743   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:11.946101   72304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:12.112740   72304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:12.246863   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:12.269551   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:12.298838   72304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:12.298907   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.312059   72304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:12.312113   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.324076   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.336239   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.350088   72304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:12.368362   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.385406   72304 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.407195   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.420065   72304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:12.431195   72304 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:12.431260   72304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:12.446263   72304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:12.457137   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:12.622756   72304 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:12.799932   72304 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:12.800012   72304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:12.807795   72304 start.go:562] Will wait 60s for crictl version
	I0425 20:03:12.807862   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:03:12.813860   72304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:12.861249   72304 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:12.861327   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.896140   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.942768   72304 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
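The lines above show minikube reconfiguring CRI-O over SSH before starting Kubernetes: the pause image and cgroup driver are rewritten in /etc/crio/crio.conf.d/02-crio.conf, netfilter is loaded, IPv4 forwarding is enabled, and crio is restarted. Condensed into standalone shell, as an illustrative sketch of the Run: lines above rather than a verbatim extract:

  # point CRI-O at the expected pause image and the cgroupfs cgroup driver
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
  # make bridged pod traffic visible to iptables and turn on IPv4 forwarding
  sudo modprobe br_netfilter
  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
  # pick up the new configuration
  sudo systemctl daemon-reload && sudo systemctl restart crio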
	I0425 20:03:09.079550   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0425 20:03:09.079607   72220 cache_images.go:123] Successfully loaded all cached images
	I0425 20:03:09.079615   72220 cache_images.go:92] duration metric: took 16.470485982s to LoadCachedImages
	I0425 20:03:09.079629   72220 kubeadm.go:928] updating node { 192.168.72.142 8443 v1.30.0 crio true true} ...
	I0425 20:03:09.079764   72220 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-744552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:09.079839   72220 ssh_runner.go:195] Run: crio config
	I0425 20:03:09.139170   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:09.139194   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:09.139206   72220 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:09.139225   72220 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.142 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-744552 NodeName:no-preload-744552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:09.139365   72220 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-744552"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:09.139426   72220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:09.151828   72220 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:09.151884   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:09.163310   72220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0425 20:03:09.183132   72220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:09.203038   72220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
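The generated kubeadm.yaml above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is copied to /var/tmp/minikube/kubeadm.yaml.new and later replayed phase by phase during the cluster restart, as the subsequent "kubeadm init phase" Run: lines show. A rough sketch of that ordering; CONF and KUBEADM_PATH are shorthand introduced for this example only, the log runs each command with the full paths inline:

  CONF=/var/tmp/minikube/kubeadm.yaml
  KUBEADM_PATH="/var/lib/minikube/binaries/v1.30.0:$PATH"
  sudo env PATH="$KUBEADM_PATH" kubeadm init phase certs all          --config "$CONF"
  sudo env PATH="$KUBEADM_PATH" kubeadm init phase kubeconfig all     --config "$CONF"
  sudo env PATH="$KUBEADM_PATH" kubeadm init phase kubelet-start      --config "$CONF"
  sudo env PATH="$KUBEADM_PATH" kubeadm init phase control-plane all  --config "$CONF"
  sudo env PATH="$KUBEADM_PATH" kubeadm init phase etcd local         --config "$CONF"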
	I0425 20:03:09.223717   72220 ssh_runner.go:195] Run: grep 192.168.72.142	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:09.228467   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:09.243976   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:09.361475   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:09.380862   72220 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552 for IP: 192.168.72.142
	I0425 20:03:09.380886   72220 certs.go:194] generating shared ca certs ...
	I0425 20:03:09.380901   72220 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:09.381076   72220 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:09.381132   72220 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:09.381147   72220 certs.go:256] generating profile certs ...
	I0425 20:03:09.381254   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/client.key
	I0425 20:03:09.381337   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key.a705cb96
	I0425 20:03:09.381392   72220 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key
	I0425 20:03:09.381538   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:09.381586   72220 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:09.381601   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:09.381638   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:09.381668   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:09.381702   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:09.381761   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:09.382459   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:09.423895   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:09.462481   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:09.491394   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:09.532779   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 20:03:09.569107   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 20:03:09.597381   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:09.623962   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:09.651141   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:09.677295   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:09.702404   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:09.729275   72220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:09.748421   72220 ssh_runner.go:195] Run: openssl version
	I0425 20:03:09.754848   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:09.768121   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774468   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774529   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.783568   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:09.799120   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:09.812983   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818660   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818740   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.826091   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:09.840115   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:09.853372   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858387   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858455   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.864693   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:09.876755   72220 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:09.882829   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:09.890219   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:09.897091   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:09.906017   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:09.913154   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:09.919989   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
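The openssl calls above do two things: install hash-named symlinks so the system trust store can resolve the minikube CAs, and verify that each control-plane certificate remains valid at least 24 hours out (-checkend 86400). As standalone commands, for illustration only; the paths and the b5213941 hash are taken from the log:

  # subject hash of the CA, used as the symlink name under /etc/ssl/certs
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
  # exits non-zero if the certificate expires within the next 86400 seconds
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400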
	I0425 20:03:09.926552   72220 kubeadm.go:391] StartCluster: {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:09.926671   72220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:09.926734   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:09.971983   72220 cri.go:89] found id: ""
	I0425 20:03:09.972071   72220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:09.983371   72220 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:09.983399   72220 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:09.983406   72220 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:09.983451   72220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:09.994047   72220 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:09.995080   72220 kubeconfig.go:125] found "no-preload-744552" server: "https://192.168.72.142:8443"
	I0425 20:03:09.997202   72220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:10.007666   72220 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.142
	I0425 20:03:10.007703   72220 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:10.007713   72220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:10.007752   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:10.049581   72220 cri.go:89] found id: ""
	I0425 20:03:10.049679   72220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:10.071032   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:10.083240   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:10.083267   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:10.083314   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:10.093444   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:10.093507   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:10.104291   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:10.114596   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:10.114659   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:10.125118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.138299   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:10.138362   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.152185   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:10.163493   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:10.163555   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:10.177214   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:10.188286   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:10.312536   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.497483   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.184911769s)
	I0425 20:03:11.497531   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.753732   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.871246   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.968366   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:11.968445   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.468885   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.968598   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:13.037502   72220 api_server.go:72] duration metric: took 1.069135698s to wait for apiserver process to appear ...
	I0425 20:03:13.037542   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:13.037568   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:13.038540   72220 api_server.go:269] stopped: https://192.168.72.142:8443/healthz: Get "https://192.168.72.142:8443/healthz": dial tcp 192.168.72.142:8443: connect: connection refused
	I0425 20:03:13.537713   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:12.944206   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:12.947412   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.947822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:12.947852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.948086   72304 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:12.953504   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:12.969171   72304 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:12.969344   72304 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:12.969402   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:13.016509   72304 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:13.016585   72304 ssh_runner.go:195] Run: which lz4
	I0425 20:03:13.022023   72304 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 20:03:13.027861   72304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:13.027896   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:14.913405   72304 crio.go:462] duration metric: took 1.891428846s to copy over tarball
	I0425 20:03:14.913466   72304 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:03:12.659136   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting to get IP...
	I0425 20:03:12.660227   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.660770   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.660843   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.660724   73691 retry.go:31] will retry after 234.96602ms: waiting for machine to come up
	I0425 20:03:12.897395   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.897966   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.897993   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.897913   73691 retry.go:31] will retry after 387.692223ms: waiting for machine to come up
	I0425 20:03:13.287742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.288414   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.288443   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.288397   73691 retry.go:31] will retry after 461.897892ms: waiting for machine to come up
	I0425 20:03:13.752061   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.752574   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.752603   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.752513   73691 retry.go:31] will retry after 452.347315ms: waiting for machine to come up
	I0425 20:03:14.206275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.206684   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.206708   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.206629   73691 retry.go:31] will retry after 466.12355ms: waiting for machine to come up
	I0425 20:03:14.674265   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.674788   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.674818   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.674735   73691 retry.go:31] will retry after 697.70071ms: waiting for machine to come up
	I0425 20:03:15.373862   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:15.374297   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:15.374325   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:15.374252   73691 retry.go:31] will retry after 835.73273ms: waiting for machine to come up
	I0425 20:03:16.211394   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:16.211870   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:16.211902   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:16.211815   73691 retry.go:31] will retry after 1.26739043s: waiting for machine to come up
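While libmachine retries above, it is simply waiting for the old-k8s-version-210442 domain to acquire a DHCP lease in the mk-old-k8s-version-210442 network. The same information can be read directly from libvirt; a hypothetical manual check, with the connection URI and network name taken from the log:

  virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-210442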
	I0425 20:03:16.441793   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.441829   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.441848   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.506023   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.506057   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.538293   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.544891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:16.544925   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.038519   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.049842   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.049883   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.538420   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.545891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.545929   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:18.038192   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:18.042957   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:03:18.063131   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:18.063171   72220 api_server.go:131] duration metric: took 5.025619242s to wait for apiserver health ...
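The 403 -> 500 -> 200 progression above reflects the apiserver's aggregated health endpoint as its post-start hooks complete. The same per-check [+]/[-] breakdown can be requested directly; illustrative only, where -k skips TLS verification for the cluster's self-signed CA and anonymous requests may still be rejected with 403 before the RBAC bootstrap roles exist:

  curl -k 'https://192.168.72.142:8443/healthz?verbose'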
	I0425 20:03:18.063182   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:18.063192   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:18.405047   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:03:18.552639   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:18.565507   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:03:18.591534   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:17.662135   72304 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.748640149s)
	I0425 20:03:17.662171   72304 crio.go:469] duration metric: took 2.748741671s to extract the tarball
	I0425 20:03:17.662184   72304 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:17.706288   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:17.773537   72304 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:03:17.773565   72304 cache_images.go:84] Images are preloaded, skipping loading
	I0425 20:03:17.773575   72304 kubeadm.go:928] updating node { 192.168.39.123 8444 v1.30.0 crio true true} ...
	I0425 20:03:17.773709   72304 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142196 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:17.773799   72304 ssh_runner.go:195] Run: crio config
	I0425 20:03:17.836354   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:17.836379   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:17.836391   72304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:17.836411   72304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142196 NodeName:default-k8s-diff-port-142196 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:17.836545   72304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142196"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:17.836599   72304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:17.848441   72304 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:17.848506   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:17.860320   72304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0425 20:03:17.885528   72304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:17.905701   72304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
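The block above is the kubeadm.yaml that minikube renders in memory and then copies to /var/tmp/minikube/kubeadm.yaml.new (2172 bytes). As a minimal sketch only, not minikube's actual code, the same render step can be expressed with Go's text/template; kubeadmParams and the field names are illustrative assumptions, and only the address, port and node name from this log are filled in.

```go
// Minimal sketch (not minikube's implementation): render a kubeadm
// InitConfiguration from a template, parameterised on the values that
// vary per profile in the log above.
package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds the per-profile values; the struct and field
// names are illustrative, not minikube identifiers.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

const kubeadmTemplate = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTemplate))
	// Values taken from the log above for default-k8s-diff-port-142196.
	p := kubeadmParams{
		AdvertiseAddress: "192.168.39.123",
		BindPort:         8444,
		NodeName:         "default-k8s-diff-port-142196",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```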
	I0425 20:03:17.925064   72304 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:17.930085   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:17.944507   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:18.108208   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:18.134428   72304 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196 for IP: 192.168.39.123
	I0425 20:03:18.134456   72304 certs.go:194] generating shared ca certs ...
	I0425 20:03:18.134479   72304 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:18.134672   72304 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:18.134745   72304 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:18.134761   72304 certs.go:256] generating profile certs ...
	I0425 20:03:18.134870   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/client.key
	I0425 20:03:18.245553   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key.1fb61bcb
	I0425 20:03:18.245666   72304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key
	I0425 20:03:18.245833   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:18.245880   72304 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:18.245894   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:18.245934   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:18.245964   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:18.245997   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:18.246058   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:18.246994   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:18.293000   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:18.322296   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:18.358060   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:18.390999   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0425 20:03:18.420333   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:18.450050   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:18.477983   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:18.506030   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:18.538394   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:18.574361   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:18.610827   72304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:18.634141   72304 ssh_runner.go:195] Run: openssl version
	I0425 20:03:18.640647   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:18.653988   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659400   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659458   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.665868   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:18.679247   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:18.692272   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697356   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697410   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.703694   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:18.716412   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:18.733362   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739598   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739651   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.748175   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
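The steps between 20:03:18.640 and 20:03:18.748 install each CA into /usr/share/ca-certificates and then link it under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0). Below is a minimal sketch of that hash-and-symlink step, assuming the openssl binary is on PATH; installCACert is a hypothetical helper, not a minikube function.

```go
// Sketch of the hash-and-symlink step seen in the log: compute the
// OpenSSL subject hash of a CA file and link /etc/ssl/certs/<hash>.0
// to it, mirroring `ln -fs`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// Same command the log runs: openssl x509 -hash -noout -in <file>.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace an existing link, as `ln -fs` would.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```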
	I0425 20:03:18.764492   72304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:18.770594   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:18.777414   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:18.784614   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:18.793453   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:18.800721   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:18.807982   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
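Each `openssl x509 -noout -checkend 86400` run above asks whether the named certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509 is sketched below, assuming one PEM certificate per file; expiresWithin is a hypothetical helper, not part of minikube.

```go
// Sketch of the `openssl x509 -checkend 86400` checks above: report
// whether a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + window" is past the certificate's NotAfter time.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```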
	I0425 20:03:18.814836   72304 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:18.814942   72304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:18.814992   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.864771   72304 cri.go:89] found id: ""
	I0425 20:03:18.864834   72304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:18.878200   72304 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:18.878238   72304 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:18.878245   72304 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:18.878305   72304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:18.892071   72304 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:18.892973   72304 kubeconfig.go:125] found "default-k8s-diff-port-142196" server: "https://192.168.39.123:8444"
	I0425 20:03:18.894860   72304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:18.907959   72304 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.123
	I0425 20:03:18.907989   72304 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:18.907998   72304 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:18.908045   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.951245   72304 cri.go:89] found id: ""
	I0425 20:03:18.951311   72304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:18.980033   72304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:18.995453   72304 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:18.995473   72304 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:18.995524   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0425 20:03:19.007409   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:19.007470   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:19.019782   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0425 20:03:19.031410   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:19.031493   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:19.043439   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.055936   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:19.055999   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.067986   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0425 20:03:19.080785   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:19.080869   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:19.092802   72304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:19.105024   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.240077   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.259510   72304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.019382485s)
	I0425 20:03:20.259544   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.489833   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.599319   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.784451   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:20.784606   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.284759   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:17.480654   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:17.481045   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:17.481094   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:17.481007   73691 retry.go:31] will retry after 1.238487953s: waiting for machine to come up
	I0425 20:03:18.720512   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:18.720940   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:18.720965   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:18.720902   73691 retry.go:31] will retry after 2.277078909s: waiting for machine to come up
	I0425 20:03:20.999749   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:21.000275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:21.000305   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:21.000223   73691 retry.go:31] will retry after 2.81059851s: waiting for machine to come up
	I0425 20:03:18.940880   72220 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:18.983894   72220 system_pods.go:61] "coredns-7db6d8ff4d-67sp6" [0fc3ee18-e3fe-4f4a-a5bd-4d6e3497bfa3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:18.983953   72220 system_pods.go:61] "etcd-no-preload-744552" [f3768d08-4cc6-42aa-9d1c-b0fd5d6ffed5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:18.983975   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [9d927e1f-4ddb-4b54-b1f1-f5248cb51745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:18.983984   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [cc71ce6c-22ba-4189-99dc-dd2da6506d37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:18.983993   72220 system_pods.go:61] "kube-proxy-whkbk" [a22b51a9-4854-41f5-bb5a-a81920a09b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0425 20:03:18.984026   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [5f01cd76-d6b7-4033-9aa9-38cac91965d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:18.984037   72220 system_pods.go:61] "metrics-server-569cc877fc-6n2gd" [03283a78-d44f-4f60-9743-680c18aeace3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:18.984052   72220 system_pods.go:61] "storage-provisioner" [4211811e-85ce-4da2-bc16-16909c26ced7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0425 20:03:18.984064   72220 system_pods.go:74] duration metric: took 392.509163ms to wait for pod list to return data ...
	I0425 20:03:18.984077   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:18.989373   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:18.989405   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:18.989424   72220 node_conditions.go:105] duration metric: took 5.341625ms to run NodePressure ...
	I0425 20:03:18.989446   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.809313   72220 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818730   72220 kubeadm.go:733] kubelet initialised
	I0425 20:03:19.818753   72220 kubeadm.go:734] duration metric: took 9.41696ms waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818761   72220 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:19.825762   72220 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:21.834658   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:21.785434   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.855046   72304 api_server.go:72] duration metric: took 1.070594042s to wait for apiserver process to appear ...
	I0425 20:03:21.855127   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:21.855156   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:21.855709   72304 api_server.go:269] stopped: https://192.168.39.123:8444/healthz: Get "https://192.168.39.123:8444/healthz": dial tcp 192.168.39.123:8444: connect: connection refused
	I0425 20:03:22.355555   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.430068   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.430099   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.430115   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.487089   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.487124   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.855301   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.861270   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:24.861299   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:25.356007   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.360802   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.360839   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:25.855336   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.861719   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.861753   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:23.812963   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:23.813457   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:23.813476   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:23.813429   73691 retry.go:31] will retry after 2.508562986s: waiting for machine to come up
	I0425 20:03:26.323267   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:26.323733   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:26.323761   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:26.323699   73691 retry.go:31] will retry after 4.475703543s: waiting for machine to come up
	I0425 20:03:26.355254   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.360977   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.361011   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:26.855547   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.860178   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.860203   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.355819   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.360466   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:27.360491   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.856219   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.861706   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:03:27.868486   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:27.868525   72304 api_server.go:131] duration metric: took 6.013385579s to wait for apiserver health ...
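The poll from 20:03:21.855 to 20:03:27.861 retries GET /healthz roughly every 500ms, tolerating the early 403 (anonymous user while RBAC bootstrap roles are still being created) and 500 (post-start hooks not yet finished) responses until a 200 arrives. A minimal sketch of such a loop follows; TLS verification is skipped here only to keep the sketch self-contained (minikube itself authenticates with the cluster's certificates), and waitForHealthz is a hypothetical name.

```go
// Sketch of the healthz poll seen above: hit /healthz on an interval
// and stop once it returns HTTP 200, treating 403/500 as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Matches the ~500ms cadence visible in the timestamps above.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.123:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```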
	I0425 20:03:27.868536   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:27.868544   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:27.870534   72304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:03:24.335382   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:24.335415   72220 pod_ready.go:81] duration metric: took 4.509621487s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:24.335427   72220 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:26.342530   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:28.841444   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:27.871863   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:27.885767   72304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:03:27.910270   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:27.922984   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:27.923016   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:27.923024   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:27.923030   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:27.923036   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:27.923041   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:03:27.923052   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:27.923057   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:27.923061   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:03:27.923067   72304 system_pods.go:74] duration metric: took 12.774358ms to wait for pod list to return data ...
	I0425 20:03:27.923073   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:27.927553   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:27.927582   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:27.927596   72304 node_conditions.go:105] duration metric: took 4.517775ms to run NodePressure ...
	I0425 20:03:27.927616   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:28.213013   72304 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217836   72304 kubeadm.go:733] kubelet initialised
	I0425 20:03:28.217860   72304 kubeadm.go:734] duration metric: took 4.809ms waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217869   72304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:28.225122   72304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.229920   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229940   72304 pod_ready.go:81] duration metric: took 4.794976ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.229948   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229954   72304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.234362   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234380   72304 pod_ready.go:81] duration metric: took 4.417955ms for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.234388   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234394   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.238885   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238904   72304 pod_ready.go:81] duration metric: took 4.504378ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.238917   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238924   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.314420   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314446   72304 pod_ready.go:81] duration metric: took 75.511589ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.314457   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314464   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.714128   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714165   72304 pod_ready.go:81] duration metric: took 399.694231ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.714178   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714187   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.113925   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113958   72304 pod_ready.go:81] duration metric: took 399.760651ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.113971   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113977   72304 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.514107   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514132   72304 pod_ready.go:81] duration metric: took 400.147308ms for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.514142   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514149   72304 pod_ready.go:38] duration metric: took 1.296270699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
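The pod_ready.go entries above poll each system-critical pod until its Ready condition reports True, giving up after the per-pod 4m0s budget. A minimal client-go sketch of that polling pattern follows; the kubeconfig path, the 2-second poll interval, and the helper name are illustrative assumptions, not minikube's actual implementation.

// waitready_sketch.go: poll one pod's Ready condition until it is True or the context expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil // pod reports Ready
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // timeout or cancellation
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // path is a placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitPodReady(ctx, cs, "kube-system", "etcd-default-k8s-diff-port-142196"))
}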
	I0425 20:03:29.514167   72304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:03:29.528766   72304 ops.go:34] apiserver oom_adj: -16
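The check above reads /proc/<pid>/oom_adj for the kube-apiserver and expects a strongly negative value (-16), which keeps the kernel OOM killer away from the apiserver. A small Go sketch of the same read, assuming the PID is already known (the pgrep lookup from the logged command is omitted):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readOOMAdj returns the oom_adj value of the process with the given PID.
func readOOMAdj(pid int) (int, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(data)))
}

func main() {
	adj, err := readOOMAdj(1) // PID 1 used only as an example
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("oom_adj:", adj)
}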
	I0425 20:03:29.528791   72304 kubeadm.go:591] duration metric: took 10.650540723s to restartPrimaryControlPlane
	I0425 20:03:29.528801   72304 kubeadm.go:393] duration metric: took 10.713975851s to StartCluster
	I0425 20:03:29.528816   72304 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.528887   72304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:29.530674   72304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.530951   72304 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:03:29.532792   72304 out.go:177] * Verifying Kubernetes components...
	I0425 20:03:29.531039   72304 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:03:29.531203   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:29.534328   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:29.534349   72304 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534377   72304 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534383   72304 addons.go:243] addon metrics-server should already be in state true
	I0425 20:03:29.534331   72304 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534416   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534441   72304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142196"
	I0425 20:03:29.534334   72304 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534536   72304 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534549   72304 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:03:29.534584   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534786   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534814   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534839   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534815   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534956   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.535000   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.551165   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0425 20:03:29.551680   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552007   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0425 20:03:29.552399   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.552419   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.552445   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552864   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553003   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.553028   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.553066   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.553409   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553621   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0425 20:03:29.554006   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.554024   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.554057   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.554555   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.554579   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.554908   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.555432   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.555487   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.557216   72304 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.557238   72304 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:03:29.557267   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.557642   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.557675   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.570559   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0425 20:03:29.571013   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.571538   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.571562   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.571944   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.572152   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.574003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.576061   72304 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:03:29.575108   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33777
	I0425 20:03:29.575580   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0425 20:03:29.577356   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:03:29.577374   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:03:29.577394   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.577861   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.577964   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.578333   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578356   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578514   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578543   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578735   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578909   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578947   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.579603   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.579633   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.580871   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.582436   72304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:29.581297   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.581851   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.583941   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.583971   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.583994   72304 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.584021   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:03:29.584031   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.584044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.584282   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.584430   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.586538   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.586880   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.586901   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.587119   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.587314   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.587470   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.587560   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.595882   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0425 20:03:29.596234   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.596711   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.596728   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.597146   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.597321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.598599   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.598799   72304 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:29.598811   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:03:29.598822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.600829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.601149   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.601409   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.601479   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.601537   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.772228   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:29.799159   72304 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:29.893622   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:03:29.893647   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:03:29.895090   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.919651   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:03:29.919673   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:03:29.929992   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:30.004488   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:30.004519   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:03:30.061525   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.113425632s)
	I0425 20:03:31.043511   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.148338843s)
	I0425 20:03:31.043539   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043587   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043524   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043629   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043894   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043910   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043946   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.043953   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043964   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043973   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043992   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044107   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044159   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044199   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044209   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044219   72304 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-142196"
	I0425 20:03:31.044216   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044237   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044253   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044262   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044542   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044566   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044662   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044682   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.052429   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.052451   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.052675   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.052694   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.055680   72304 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0425 20:03:31.057271   72304 addons.go:505] duration metric: took 1.526243989s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
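Each addon above was staged by scp-ing its manifests into /etc/kubernetes/addons on the VM and applying them with the bundled kubectl under the cluster's kubeconfig. A hedged sketch of that apply step (the binary and manifest paths are copied from the log; the helper itself is hypothetical, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests runs `kubectl apply -f <file> ...` with an explicit kubeconfig,
// mirroring the logged "sudo KUBECONFIG=... kubectl apply -f ..." invocations.
func applyManifests(kubectl, kubeconfig string, files ...string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	fmt.Println(err)
}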
	I0425 20:03:32.187768   71966 start.go:364] duration metric: took 56.585448027s to acquireMachinesLock for "embed-certs-512173"
	I0425 20:03:32.187838   71966 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:32.187849   71966 fix.go:54] fixHost starting: 
	I0425 20:03:32.188220   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:32.188266   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:32.207172   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0425 20:03:32.207627   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:32.208170   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:03:32.208196   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:32.208493   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:32.208700   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:32.208837   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:03:32.210552   71966 fix.go:112] recreateIfNeeded on embed-certs-512173: state=Stopped err=<nil>
	I0425 20:03:32.210577   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	W0425 20:03:32.210741   71966 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:32.213400   71966 out.go:177] * Restarting existing kvm2 VM for "embed-certs-512173" ...
	I0425 20:03:30.803467   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804014   72712 main.go:141] libmachine: (old-k8s-version-210442) Found IP for machine: 192.168.61.136
	I0425 20:03:30.804041   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserving static IP address...
	I0425 20:03:30.804057   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has current primary IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804495   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.804535   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | skip adding static IP to network mk-old-k8s-version-210442 - found existing host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"}
	I0425 20:03:30.804562   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserved static IP address: 192.168.61.136
	I0425 20:03:30.804582   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting for SSH to be available...
	I0425 20:03:30.804599   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Getting to WaitForSSH function...
	I0425 20:03:30.807110   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807533   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.807556   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807706   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH client type: external
	I0425 20:03:30.807725   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa (-rw-------)
	I0425 20:03:30.807767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:30.807783   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | About to run SSH command:
	I0425 20:03:30.807815   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | exit 0
	I0425 20:03:30.935091   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:30.935445   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetConfigRaw
	I0425 20:03:30.936168   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:30.938767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939193   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.939246   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939428   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 20:03:30.939630   72712 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:30.939649   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:30.939870   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:30.942320   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.942771   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942923   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:30.943113   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943306   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943468   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:30.943640   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:30.943842   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:30.943854   72712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:31.052598   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:31.052625   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.052821   72712 buildroot.go:166] provisioning hostname "old-k8s-version-210442"
	I0425 20:03:31.052844   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.053080   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.056324   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056713   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.056745   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056885   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.057056   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057190   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057375   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.057549   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.057724   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.057742   72712 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210442 && echo "old-k8s-version-210442" | sudo tee /etc/hostname
	I0425 20:03:31.188461   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210442
	
	I0425 20:03:31.188494   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.191628   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192088   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.192117   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192332   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.192519   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192655   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192767   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.192944   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.193142   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.193167   72712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:31.317374   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:31.317402   72712 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:31.317436   72712 buildroot.go:174] setting up certificates
	I0425 20:03:31.317447   72712 provision.go:84] configureAuth start
	I0425 20:03:31.317461   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.317778   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:31.321012   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321388   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.321421   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321698   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.323976   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324326   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.324354   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324523   72712 provision.go:143] copyHostCerts
	I0425 20:03:31.324573   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:31.324584   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:31.324656   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:31.324764   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:31.324778   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:31.324807   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:31.324879   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:31.324890   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:31.324915   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:31.324978   72712 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210442 san=[127.0.0.1 192.168.61.136 localhost minikube old-k8s-version-210442]
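provision.go:117 above issues a server certificate signed by the minikube CA and constrained to the listed SANs (127.0.0.1, the VM IP, localhost, minikube, and the hostname). A rough crypto/x509 sketch of that step, under the assumptions that the CA key is a PKCS#1 RSA key and a one-year validity is acceptable; minikube's real helper may differ:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA pair; file names stand in for the ca.pem/ca-key.pem paths in the log.
	caCertPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	check(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-210442"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-210442"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.136")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	out, err := os.Create("server.pem")
	check(err)
	check(pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}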
	I0425 20:03:31.410674   72712 provision.go:177] copyRemoteCerts
	I0425 20:03:31.410728   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:31.410755   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.413170   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413449   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.413491   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413634   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.413832   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.413988   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.414156   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:31.502759   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:31.536662   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0425 20:03:31.565106   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:31.593254   72712 provision.go:87] duration metric: took 275.793443ms to configureAuth
	I0425 20:03:31.593287   72712 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:31.593621   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 20:03:31.593720   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.596515   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.596827   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.596859   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.597057   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.597287   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597448   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597624   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.597775   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.597927   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.597942   72712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:31.925149   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:31.925182   72712 machine.go:97] duration metric: took 985.540626ms to provisionDockerMachine
	I0425 20:03:31.925199   72712 start.go:293] postStartSetup for "old-k8s-version-210442" (driver="kvm2")
	I0425 20:03:31.925211   72712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:31.925258   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:31.925560   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:31.925596   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.928532   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.928982   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.929013   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.929232   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.929458   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.929637   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.929787   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.023009   72712 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:32.029391   72712 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:32.029426   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:32.029508   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:32.029576   72712 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:32.029664   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:32.046596   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:32.077323   72712 start.go:296] duration metric: took 152.112632ms for postStartSetup
	I0425 20:03:32.077396   72712 fix.go:56] duration metric: took 20.821829703s for fixHost
	I0425 20:03:32.077425   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.080136   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080477   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.080526   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080636   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.080836   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081067   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081283   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.081493   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:32.081695   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:32.081711   72712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 20:03:32.187617   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075412.163072845
	
	I0425 20:03:32.187642   72712 fix.go:216] guest clock: 1714075412.163072845
	I0425 20:03:32.187652   72712 fix.go:229] Guest: 2024-04-25 20:03:32.163072845 +0000 UTC Remote: 2024-04-25 20:03:32.07740605 +0000 UTC m=+254.767943919 (delta=85.666795ms)
	I0425 20:03:32.187675   72712 fix.go:200] guest clock delta is within tolerance: 85.666795ms
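fix.go above reads the guest clock over SSH as a Unix timestamp (the `date +%s.%N` command shown a few lines earlier) and compares it with the host clock, accepting the restart only if the skew stays within tolerance. A minimal sketch of that comparison; the parsing helper and the 2-second threshold are assumptions, not minikube's actual values:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta converts a "seconds.nanoseconds" stamp from the guest into a time
// and returns its offset from the supplied host time.
func clockDelta(guestStamp string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestStamp), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second))) // float parsing loses a little ns precision; fine for a skew check
	return guest.Sub(host), nil
}

func main() {
	// Timestamp taken from the SSH output in the log above.
	delta, err := clockDelta("1714075412.163072845", time.Now())
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("clock skew %v (within %v: %v)\n", delta, tolerance, delta <= tolerance)
}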
	I0425 20:03:32.187682   72712 start.go:83] releasing machines lock for "old-k8s-version-210442", held for 20.932154384s
	I0425 20:03:32.187709   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.187998   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:32.190538   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.190898   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.190932   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.191077   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191817   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191996   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.192076   72712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:32.192116   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.192208   72712 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:32.192230   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.194821   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.194988   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195191   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195212   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195334   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195368   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195500   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195673   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195677   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195847   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195866   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196063   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.196083   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196219   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.276462   72712 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:32.300979   72712 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:30.842282   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:32.843750   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.843779   72220 pod_ready.go:81] duration metric: took 8.508343704s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.843791   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850293   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.850316   72220 pod_ready.go:81] duration metric: took 6.517764ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850327   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855621   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.855657   72220 pod_ready.go:81] duration metric: took 5.31225ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855671   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860450   72220 pod_ready.go:92] pod "kube-proxy-whkbk" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.860483   72220 pod_ready.go:81] duration metric: took 4.797706ms for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860505   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865268   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.865286   72220 pod_ready.go:81] duration metric: took 4.774354ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865294   72220 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.458446   72712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:32.465434   72712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:32.465518   72712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:32.486929   72712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:32.486954   72712 start.go:494] detecting cgroup driver to use...
	I0425 20:03:32.487019   72712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:32.509425   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:32.530999   72712 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:32.531059   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:32.547280   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:32.563594   72712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:32.699207   72712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:32.875013   72712 docker.go:233] disabling docker service ...
	I0425 20:03:32.875096   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:32.897149   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:32.916105   72712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:33.071143   72712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:33.231529   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:33.252919   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:33.277388   72712 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0425 20:03:33.277457   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.290889   72712 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:33.290953   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.305488   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.319263   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.332961   72712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:33.354086   72712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:33.373431   72712 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:33.373517   72712 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:33.398458   72712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:33.418683   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:33.595555   72712 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:33.808015   72712 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:33.810391   72712 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:33.817593   72712 start.go:562] Will wait 60s for crictl version
	I0425 20:03:33.817646   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:33.823381   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:33.866310   72712 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:33.866411   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.905561   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.952764   72712 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0425 20:03:32.214679   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Start
	I0425 20:03:32.214880   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring networks are active...
	I0425 20:03:32.215746   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network default is active
	I0425 20:03:32.216106   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network mk-embed-certs-512173 is active
	I0425 20:03:32.216566   71966 main.go:141] libmachine: (embed-certs-512173) Getting domain xml...
	I0425 20:03:32.217397   71966 main.go:141] libmachine: (embed-certs-512173) Creating domain...
	I0425 20:03:33.554665   71966 main.go:141] libmachine: (embed-certs-512173) Waiting to get IP...
	I0425 20:03:33.555670   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.556123   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.556186   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.556089   73884 retry.go:31] will retry after 278.996701ms: waiting for machine to come up
	I0425 20:03:33.836750   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.837273   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.837301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.837244   73884 retry.go:31] will retry after 324.410317ms: waiting for machine to come up
	I0425 20:03:34.163017   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.163490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.163518   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.163457   73884 retry.go:31] will retry after 403.985826ms: waiting for machine to come up
	I0425 20:03:34.568824   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.569364   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.569397   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.569330   73884 retry.go:31] will retry after 427.12179ms: waiting for machine to come up
	I0425 20:03:34.998092   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.998684   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.998709   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.998646   73884 retry.go:31] will retry after 710.71475ms: waiting for machine to come up
	I0425 20:03:35.710643   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:35.711707   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:35.711736   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:35.711616   73884 retry.go:31] will retry after 806.283051ms: waiting for machine to come up
	I0425 20:03:31.803034   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:33.813548   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:35.304283   72304 node_ready.go:49] node "default-k8s-diff-port-142196" has status "Ready":"True"
	I0425 20:03:35.304311   72304 node_ready.go:38] duration metric: took 5.505123781s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:35.304323   72304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:35.311480   72304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320910   72304 pod_ready.go:92] pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:35.320938   72304 pod_ready.go:81] duration metric: took 9.425507ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320953   72304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:33.954161   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:33.957316   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.957778   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:33.957811   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.958080   72712 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:33.964467   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:33.984277   72712 kubeadm.go:877] updating cluster {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:33.984437   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 20:03:33.984499   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:34.049402   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:34.049479   72712 ssh_runner.go:195] Run: which lz4
	I0425 20:03:34.055519   72712 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 20:03:34.061481   72712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:34.061522   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0425 20:03:36.271646   72712 crio.go:462] duration metric: took 2.216165414s to copy over tarball
	I0425 20:03:36.271722   72712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:03:34.877483   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:37.373822   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:36.519514   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:36.520052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:36.520085   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:36.519968   73884 retry.go:31] will retry after 990.986618ms: waiting for machine to come up
	I0425 20:03:37.513151   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:37.513636   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:37.513669   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:37.513574   73884 retry.go:31] will retry after 1.371471682s: waiting for machine to come up
	I0425 20:03:38.886926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:38.887491   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:38.887527   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:38.887415   73884 retry.go:31] will retry after 1.633505345s: waiting for machine to come up
	I0425 20:03:40.523438   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:40.523975   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:40.524004   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:40.523926   73884 retry.go:31] will retry after 2.280577933s: waiting for machine to come up
	I0425 20:03:37.330040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.350040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.894331   72712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.622580176s)
	I0425 20:03:39.894364   72712 crio.go:469] duration metric: took 3.62268463s to extract the tarball
	I0425 20:03:39.894373   72712 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:39.965071   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:40.009534   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:40.009561   72712 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:03:40.009629   72712 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.009651   72712 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.009677   72712 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.009662   72712 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.009794   72712 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.009920   72712 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.010033   72712 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.010241   72712 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0425 20:03:40.011305   72712 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.011334   72712 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.011346   72712 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.011686   72712 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.012422   72712 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.012429   72712 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.012437   72712 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0425 20:03:40.012546   72712 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.143545   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.155203   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.157842   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.158081   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.161210   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.166515   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.181859   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0425 20:03:40.301699   72712 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0425 20:03:40.301759   72712 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.301805   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.379386   72712 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0425 20:03:40.379445   72712 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.379490   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406119   72712 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0425 20:03:40.406231   72712 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.406174   72712 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0425 20:03:40.406338   72712 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.406365   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406389   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420450   72712 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0425 20:03:40.420495   72712 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.420548   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420461   72712 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0425 20:03:40.420629   72712 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.420677   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430055   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.430110   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.430232   72712 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0425 20:03:40.430263   72712 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0425 20:03:40.430274   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.430277   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.430303   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430326   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.430389   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.582980   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0425 20:03:40.583094   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0425 20:03:40.587500   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0425 20:03:40.587564   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0425 20:03:40.587579   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0425 20:03:40.587650   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0425 20:03:40.587697   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0425 20:03:40.625942   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0425 20:03:40.941957   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:41.096086   72712 cache_images.go:92] duration metric: took 1.086507707s to LoadCachedImages
	W0425 20:03:41.096249   72712 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0425 20:03:41.096279   72712 kubeadm.go:928] updating node { 192.168.61.136 8443 v1.20.0 crio true true} ...
	I0425 20:03:41.096415   72712 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210442 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:41.096509   72712 ssh_runner.go:195] Run: crio config
	I0425 20:03:41.169311   72712 cni.go:84] Creating CNI manager for ""
	I0425 20:03:41.169341   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:41.169357   72712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:41.169397   72712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210442 NodeName:old-k8s-version-210442 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0425 20:03:41.169570   72712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210442"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:41.169639   72712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0425 20:03:41.182191   72712 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:41.182283   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:41.193546   72712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0425 20:03:41.218220   72712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:41.238647   72712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0425 20:03:41.259040   72712 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:41.263603   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:41.278007   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:41.425587   72712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:41.450990   72712 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442 for IP: 192.168.61.136
	I0425 20:03:41.451013   72712 certs.go:194] generating shared ca certs ...
	I0425 20:03:41.451034   72712 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:41.451225   72712 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:41.451307   72712 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:41.451323   72712 certs.go:256] generating profile certs ...
	I0425 20:03:41.451449   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.key
	I0425 20:03:41.451528   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key.1533c9ac
	I0425 20:03:41.451587   72712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key
	I0425 20:03:41.451789   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:41.451860   72712 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:41.451880   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:41.451915   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:41.451945   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:41.451968   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:41.452023   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:41.452870   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:41.510467   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:41.555595   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:41.606059   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:41.648206   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0425 20:03:41.690090   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:41.727674   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:41.766537   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:41.799524   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:41.828668   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:41.860964   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:41.890272   72712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:41.911787   72712 ssh_runner.go:195] Run: openssl version
	I0425 20:03:41.918926   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:41.933194   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.938995   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.939060   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.945934   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:41.959859   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:41.974906   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.980931   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.981006   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.987789   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:42.002455   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:42.016797   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023789   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023853   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.033189   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:42.047467   72712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:42.053552   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:42.063130   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:42.070290   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:42.079527   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:42.087983   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:42.096658   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 20:03:42.103477   72712 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:42.103596   72712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:42.103649   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.155980   72712 cri.go:89] found id: ""
	I0425 20:03:42.156085   72712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:42.172499   72712 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:42.172525   72712 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:42.172532   72712 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:42.172580   72712 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:42.187864   72712 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:42.188948   72712 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210442" does not appear in /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:42.189659   72712 kubeconfig.go:62] /home/jenkins/minikube-integration/18757-6355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210442" cluster setting kubeconfig missing "old-k8s-version-210442" context setting]
	I0425 20:03:42.190635   72712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:42.192402   72712 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:42.207284   72712 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.136
	I0425 20:03:42.207318   72712 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:42.207329   72712 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:42.207403   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.251184   72712 cri.go:89] found id: ""
	I0425 20:03:42.251257   72712 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:42.271727   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:42.289161   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:42.289184   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:42.289237   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:42.302492   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:42.302588   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:42.317790   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:42.329940   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:42.330002   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:42.342772   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:39.375028   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:41.871821   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.805640   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:42.806121   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:42.806148   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:42.806072   73884 retry.go:31] will retry after 2.588054599s: waiting for machine to come up
	I0425 20:03:45.395282   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:45.395712   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:45.395759   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:45.395662   73884 retry.go:31] will retry after 3.473643777s: waiting for machine to come up
	I0425 20:03:41.329479   72304 pod_ready.go:92] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.329511   72304 pod_ready.go:81] duration metric: took 6.008549199s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.329523   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335660   72304 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.335688   72304 pod_ready.go:81] duration metric: took 6.15557ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335700   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341409   72304 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.341433   72304 pod_ready.go:81] duration metric: took 5.723469ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341446   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347145   72304 pod_ready.go:92] pod "kube-proxy-bqmtp" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.347167   72304 pod_ready.go:81] duration metric: took 5.713095ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347179   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376913   72304 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.376939   72304 pod_ready.go:81] duration metric: took 29.751827ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376951   72304 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:43.383378   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:45.884869   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.356480   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:42.357280   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:42.370403   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:42.384245   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:42.384332   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:42.398271   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:42.412361   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:42.575076   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.186458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.480114   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.594128   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.707129   72712 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:43.707221   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.207406   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.707733   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.208100   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.708041   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.207966   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.707255   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:47.207754   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
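The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are minikube polling for the API server process to appear after the kubeadm init phases. A minimal bash sketch of that wait loop, assuming a 500ms poll interval and a 60s budget (neither value is taken from the log), would be:

    # poll until the kube-apiserver process shows up, or give up after ~60s
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "kube-apiserver process did not appear in time" >&2
            exit 1
        fi
        sleep 0.5
    done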
	I0425 20:03:43.873747   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:46.374439   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:48.871928   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:48.872457   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:48.872490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:48.872393   73884 retry.go:31] will retry after 4.148424216s: waiting for machine to come up
	I0425 20:03:48.384599   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.883246   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:47.707730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.208213   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.707685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.207879   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.707914   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.208278   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.707691   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.207600   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.707365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:52.207931   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.872282   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.872356   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.874452   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:53.022813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023343   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has current primary IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023367   71966 main.go:141] libmachine: (embed-certs-512173) Found IP for machine: 192.168.50.7
	I0425 20:03:53.023381   71966 main.go:141] libmachine: (embed-certs-512173) Reserving static IP address...
	I0425 20:03:53.023750   71966 main.go:141] libmachine: (embed-certs-512173) Reserved static IP address: 192.168.50.7
	I0425 20:03:53.023770   71966 main.go:141] libmachine: (embed-certs-512173) Waiting for SSH to be available...
	I0425 20:03:53.023791   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.023827   71966 main.go:141] libmachine: (embed-certs-512173) DBG | skip adding static IP to network mk-embed-certs-512173 - found existing host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"}
	I0425 20:03:53.023848   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Getting to WaitForSSH function...
	I0425 20:03:53.025753   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.026132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026244   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH client type: external
	I0425 20:03:53.026268   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa (-rw-------)
	I0425 20:03:53.026301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:53.026313   71966 main.go:141] libmachine: (embed-certs-512173) DBG | About to run SSH command:
	I0425 20:03:53.026325   71966 main.go:141] libmachine: (embed-certs-512173) DBG | exit 0
	I0425 20:03:53.158487   71966 main.go:141] libmachine: (embed-certs-512173) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:53.158846   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetConfigRaw
	I0425 20:03:53.159567   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.161881   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162200   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.162257   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162492   71966 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/config.json ...
	I0425 20:03:53.162658   71966 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:53.162675   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:53.162875   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.164797   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.165140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165256   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.165402   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165561   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165659   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.165815   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.165989   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.166002   71966 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:53.283185   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:53.283219   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283455   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:03:53.283480   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283690   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.286427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.286843   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286969   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.287164   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287350   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.287641   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.287881   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.287904   71966 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-512173 && echo "embed-certs-512173" | sudo tee /etc/hostname
	I0425 20:03:53.423037   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-512173
	
	I0425 20:03:53.423067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.425749   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.426140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426329   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.426501   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426640   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426747   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.426866   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.427015   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.427083   71966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-512173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-512173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-512173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:53.553687   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:53.553715   71966 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:53.553749   71966 buildroot.go:174] setting up certificates
	I0425 20:03:53.553758   71966 provision.go:84] configureAuth start
	I0425 20:03:53.553775   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.554053   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.556655   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.556995   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.557034   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.557121   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.559341   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559692   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.559718   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559897   71966 provision.go:143] copyHostCerts
	I0425 20:03:53.559970   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:53.559984   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:53.560049   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:53.560129   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:53.560136   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:53.560155   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:53.560203   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:53.560214   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:53.560233   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:53.560278   71966 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-512173 san=[127.0.0.1 192.168.50.7 embed-certs-512173 localhost minikube]
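The server certificate generated here carries the SANs listed above (127.0.0.1, 192.168.50.7, embed-certs-512173, localhost, minikube). If needed, they can be confirmed on the generated server.pem with a standard openssl call (the path is the one from the log):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'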
	I0425 20:03:53.621714   71966 provision.go:177] copyRemoteCerts
	I0425 20:03:53.621777   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:53.621804   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.624556   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.624883   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.624914   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.625128   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.625324   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.625458   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.625602   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:53.715477   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:03:53.743782   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:53.771468   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:53.798701   71966 provision.go:87] duration metric: took 244.92871ms to configureAuth
	I0425 20:03:53.798726   71966 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:53.798922   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:53.798991   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.801607   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.801946   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.801972   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.802187   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.802373   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802628   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.802833   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.802986   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.803000   71966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:54.117164   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:54.117193   71966 machine.go:97] duration metric: took 954.522384ms to provisionDockerMachine
	I0425 20:03:54.117207   71966 start.go:293] postStartSetup for "embed-certs-512173" (driver="kvm2")
	I0425 20:03:54.117219   71966 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:54.117238   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.117558   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:54.117591   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.120060   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.120454   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120575   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.120761   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.120891   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.121002   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.209919   71966 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:54.215633   71966 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:54.215663   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:54.215747   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:54.215860   71966 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:54.215996   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:54.227250   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:54.257169   71966 start.go:296] duration metric: took 139.949813ms for postStartSetup
	I0425 20:03:54.257212   71966 fix.go:56] duration metric: took 22.069363419s for fixHost
	I0425 20:03:54.257237   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.260255   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260588   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.260613   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260731   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.260928   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261099   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261266   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.261447   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:54.261644   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:54.261655   71966 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 20:03:54.376222   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075434.352338373
	
	I0425 20:03:54.376245   71966 fix.go:216] guest clock: 1714075434.352338373
	I0425 20:03:54.376255   71966 fix.go:229] Guest: 2024-04-25 20:03:54.352338373 +0000 UTC Remote: 2024-04-25 20:03:54.257217658 +0000 UTC m=+368.446046405 (delta=95.120715ms)
	I0425 20:03:54.376287   71966 fix.go:200] guest clock delta is within tolerance: 95.120715ms
	I0425 20:03:54.376295   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 22.188484297s
	I0425 20:03:54.376317   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.376600   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:54.379217   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379646   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.379678   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379869   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380436   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380633   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380729   71966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:54.380779   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.380857   71966 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:54.380880   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.383698   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384081   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384283   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384471   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.384610   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.384647   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384683   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384781   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.384821   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384982   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.385131   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.385330   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.468506   71966 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:54.493995   71966 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:54.642719   71966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:54.649565   71966 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:54.649632   71966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:54.667526   71966 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:54.667546   71966 start.go:494] detecting cgroup driver to use...
	I0425 20:03:54.667596   71966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:54.685384   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:54.701852   71966 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:54.701905   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:54.718559   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:54.734874   71966 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:54.858325   71966 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:55.045158   71966 docker.go:233] disabling docker service ...
	I0425 20:03:55.045219   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:55.061668   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:55.076486   71966 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:55.207287   71966 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:55.352537   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:55.369470   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:55.392638   71966 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:55.392718   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.404590   71966 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:55.404655   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.416129   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.427176   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.438632   71966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:55.450725   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.462912   71966 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.485340   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
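Taken together, the sed edits above configure /etc/crio/crio.conf.d/02-crio.conf for the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A quick way to confirm the result on the node (the expected values below are reconstructed from the commands, not captured from the file) is:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",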
	I0425 20:03:55.498134   71966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:55.508378   71966 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:55.508451   71966 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:55.523073   71966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
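The modprobe of br_netfilter and the echo into /proc/sys/net/ipv4/ip_forward above only last until the next boot; this run does not persist them, but the conventional way to do so (shown purely for context) is a modules-load.d entry plus a sysctl.d drop-in:

    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' \
        | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system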
	I0425 20:03:55.533901   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:55.666845   71966 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:55.828131   71966 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:55.828199   71966 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:55.833768   71966 start.go:562] Will wait 60s for crictl version
	I0425 20:03:55.833824   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:03:55.838000   71966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:55.881652   71966 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:55.881753   71966 ssh_runner.go:195] Run: crio --version
	I0425 20:03:55.917675   71966 ssh_runner.go:195] Run: crio --version
	I0425 20:03:55.953046   71966 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:03:52.884447   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:54.884538   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.707459   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.208241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.707431   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.207538   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.707289   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.207319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.707625   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.207562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.708324   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:57.207348   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.373713   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.374476   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:55.954484   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:55.957214   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957611   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:55.957638   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957832   71966 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:55.962420   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:55.976512   71966 kubeadm.go:877] updating cluster {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:55.976626   71966 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:55.976694   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:56.019881   71966 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:56.019942   71966 ssh_runner.go:195] Run: which lz4
	I0425 20:03:56.024524   71966 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 20:03:56.029297   71966 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:56.029339   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:57.736602   71966 crio.go:462] duration metric: took 1.712117844s to copy over tarball
	I0425 20:03:57.736666   71966 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:04:00.331696   71966 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.594977915s)
	I0425 20:04:00.331739   71966 crio.go:469] duration metric: took 2.595109768s to extract the tarball
	I0425 20:04:00.331751   71966 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:04:00.375437   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:04:00.430963   71966 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:04:00.430987   71966 cache_images.go:84] Images are preloaded, skipping loading
	I0425 20:04:00.430994   71966 kubeadm.go:928] updating node { 192.168.50.7 8443 v1.30.0 crio true true} ...
	I0425 20:04:00.431081   71966 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-512173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:04:00.431154   71966 ssh_runner.go:195] Run: crio config
	I0425 20:04:00.487082   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:00.487106   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:00.487117   71966 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:04:00.487135   71966 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-512173 NodeName:embed-certs-512173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:04:00.487306   71966 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-512173"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
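The three YAML documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what minikube writes to /var/tmp/minikube/kubeadm.yaml for this node. For comparison, kubeadm can print its stock defaults for the same document kinds:

    kubeadm config print init-defaults \
        --component-configs KubeletConfiguration,KubeProxyConfiguration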
	
	I0425 20:04:00.487378   71966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:04:00.498819   71966 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:04:00.498881   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:04:00.509212   71966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0425 20:04:00.527703   71966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:04:00.546867   71966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0425 20:04:00.566302   71966 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0425 20:04:00.570629   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:04:00.584123   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:00.717589   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:00.743108   71966 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173 for IP: 192.168.50.7
	I0425 20:04:00.743173   71966 certs.go:194] generating shared ca certs ...
	I0425 20:04:00.743201   71966 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:00.743397   71966 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:04:00.743462   71966 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:04:00.743480   71966 certs.go:256] generating profile certs ...
	I0425 20:04:00.743644   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/client.key
	I0425 20:04:00.743729   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key.4a0c231f
	I0425 20:04:00.743789   71966 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key
	I0425 20:04:00.743964   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:04:00.744019   71966 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:04:00.744033   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:04:00.744064   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:04:00.744093   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:04:00.744117   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:04:00.744158   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:04:00.745130   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:04:00.797856   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:04:00.848631   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:56.885355   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:58.885857   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.707868   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.208319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.207410   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.707562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.208006   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.708245   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.208178   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.707239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:02.207926   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.873851   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.372919   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:00.877499   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:04:01.210716   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0425 20:04:01.239562   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:04:01.267356   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:04:01.295649   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:04:01.323739   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:04:01.350440   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:04:01.379693   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:04:01.409347   71966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:04:01.429857   71966 ssh_runner.go:195] Run: openssl version
	I0425 20:04:01.437636   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:04:01.449656   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455022   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455074   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.461442   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:04:01.473323   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:04:01.485988   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491661   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491719   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.498567   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:04:01.510983   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:04:01.523098   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528619   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528667   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.535129   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:04:01.546668   71966 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:04:01.552076   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:04:01.558928   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:04:01.566406   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:04:01.574761   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:04:01.581250   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:04:01.588506   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
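
The six `openssl x509 ... -checkend 86400` runs above verify that each existing control-plane certificate remains valid for at least another 24 hours before the cluster restart reuses them. Below is a minimal Go sketch of the same expiry test; it is illustrative only (not minikube's implementation), and the certificate path is simply taken from the log above.

// certcheck.go: a minimal sketch of the check that `openssl x509 -checkend 86400` performs.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// openssl -checkend N exits non-zero if the cert expires within N seconds.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
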
	I0425 20:04:01.594844   71966 kubeadm.go:391] StartCluster: {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:04:01.594917   71966 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:04:01.594978   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.648050   71966 cri.go:89] found id: ""
	I0425 20:04:01.648155   71966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:04:01.664291   71966 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:04:01.664318   71966 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:04:01.664325   71966 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:04:01.664387   71966 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:04:01.678686   71966 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:04:01.680096   71966 kubeconfig.go:125] found "embed-certs-512173" server: "https://192.168.50.7:8443"
	I0425 20:04:01.682375   71966 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:04:01.699073   71966 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0425 20:04:01.699109   71966 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:04:01.699122   71966 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:04:01.699190   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.744556   71966 cri.go:89] found id: ""
	I0425 20:04:01.744633   71966 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:04:01.767121   71966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:04:01.778499   71966 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:04:01.778517   71966 kubeadm.go:156] found existing configuration files:
	
	I0425 20:04:01.778575   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:04:01.789171   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:04:01.789242   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:04:01.800000   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:04:01.811015   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:04:01.811078   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:04:01.821752   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.832900   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:04:01.832962   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.844058   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:04:01.854774   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:04:01.854824   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:04:01.866086   71966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:04:01.879229   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.180778   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.971467   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.202841   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.286951   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.412260   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:04:03.412375   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.913176   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.413418   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.443763   71966 api_server.go:72] duration metric: took 1.031501246s to wait for apiserver process to appear ...
	I0425 20:04:04.443796   71966 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:04:04.443816   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:04.444334   71966 api_server.go:269] stopped: https://192.168.50.7:8443/healthz: Get "https://192.168.50.7:8443/healthz": dial tcp 192.168.50.7:8443: connect: connection refused
	I0425 20:04:04.943937   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:01.384590   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:03.885859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.707796   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.207913   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.708267   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.207491   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.707894   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.207346   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.707801   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.208283   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.707342   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:07.208190   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.381611   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:06.875270   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.463721   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.463767   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.463785   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.479254   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.479283   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.944812   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.949683   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:07.949710   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.444237   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.451663   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.451706   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.944231   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.949165   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.949194   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.444776   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.449703   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.449732   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.943865   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.948474   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.948509   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.444040   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.448740   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:10.448781   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.944487   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.950181   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:04:10.957455   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:04:10.957479   71966 api_server.go:131] duration metric: took 6.513676295s to wait for apiserver health ...
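
The healthz polling above shows the normal startup sequence for a restarted apiserver: first a connection refused while the static pod comes back up, then 403 responses because the unauthenticated probe is rejected until the RBAC bootstrap roles that permit anonymous access to /healthz exist, then 500 while the remaining post-start hooks (rbac/bootstrap-roles, apiservice-discovery-controller, and so on) finish, and finally 200 "ok". A minimal Go sketch of such a poll loop follows; it is illustrative only (not minikube's api_server.go), and the endpoint and the choice to skip TLS verification are assumptions taken from the log above.

// healthzpoll.go: a minimal sketch of waiting for the apiserver's /healthz to report "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// Assumption: skip TLS verification because the serving cert is signed by the
	// cluster-local minikubeCA rather than a system-trusted CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403 and 500 are expected transient responses during startup.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.7:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
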
	I0425 20:04:10.957487   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:10.957496   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:10.959196   71966 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:04:06.384595   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:08.883972   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.707466   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.207370   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.707951   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.207604   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.708057   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.207422   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.707391   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.207510   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.707828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:12.207519   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.960795   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:04:10.977005   71966 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:04:11.001393   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:04:11.021408   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:04:11.021439   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:04:11.021453   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:04:11.021466   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:04:11.021478   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:04:11.021495   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:04:11.021502   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:04:11.021513   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:04:11.021521   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:04:11.021533   71966 system_pods.go:74] duration metric: took 20.120592ms to wait for pod list to return data ...
	I0425 20:04:11.021540   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:04:11.025328   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:04:11.025360   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:04:11.025374   71966 node_conditions.go:105] duration metric: took 3.826846ms to run NodePressure ...
	I0425 20:04:11.025394   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:11.304673   71966 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309061   71966 kubeadm.go:733] kubelet initialised
	I0425 20:04:11.309082   71966 kubeadm.go:734] duration metric: took 4.385794ms waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309089   71966 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:11.314583   71966 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.319490   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319515   71966 pod_ready.go:81] duration metric: took 4.900118ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.319524   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319534   71966 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.324084   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324101   71966 pod_ready.go:81] duration metric: took 4.557199ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.324108   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324113   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.328151   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328167   71966 pod_ready.go:81] duration metric: took 4.047894ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.328174   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328184   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.404944   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.404982   71966 pod_ready.go:81] duration metric: took 76.789573ms for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.404997   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.405006   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.805191   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805221   71966 pod_ready.go:81] duration metric: took 400.202708ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.805238   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805248   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.205817   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205847   71966 pod_ready.go:81] duration metric: took 400.591033ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.205858   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205866   71966 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.605705   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605736   71966 pod_ready.go:81] duration metric: took 399.849241ms for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.605745   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605754   71966 pod_ready.go:38] duration metric: took 1.29665644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:12.605776   71966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:04:12.620368   71966 ops.go:34] apiserver oom_adj: -16
	I0425 20:04:12.620397   71966 kubeadm.go:591] duration metric: took 10.956065292s to restartPrimaryControlPlane
	I0425 20:04:12.620405   71966 kubeadm.go:393] duration metric: took 11.025567867s to StartCluster
	I0425 20:04:12.620419   71966 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.620492   71966 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:04:12.623272   71966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.623577   71966 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:04:12.625335   71966 out.go:177] * Verifying Kubernetes components...
	I0425 20:04:12.623608   71966 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:04:12.623775   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:04:12.626619   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:12.626625   71966 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-512173"
	I0425 20:04:12.626642   71966 addons.go:69] Setting metrics-server=true in profile "embed-certs-512173"
	I0425 20:04:12.626664   71966 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-512173"
	W0425 20:04:12.626674   71966 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:04:12.626681   71966 addons.go:234] Setting addon metrics-server=true in "embed-certs-512173"
	W0425 20:04:12.626690   71966 addons.go:243] addon metrics-server should already be in state true
	I0425 20:04:12.626623   71966 addons.go:69] Setting default-storageclass=true in profile "embed-certs-512173"
	I0425 20:04:12.626709   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626714   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626718   71966 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-512173"
	I0425 20:04:12.626985   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627013   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627020   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627035   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627088   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627130   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.642680   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0425 20:04:12.642798   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0425 20:04:12.642972   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0425 20:04:12.643182   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643288   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643418   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643671   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643696   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643871   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643884   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643893   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643915   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.644227   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644235   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644403   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.644431   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644819   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.644942   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.644980   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.645022   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.647992   71966 addons.go:234] Setting addon default-storageclass=true in "embed-certs-512173"
	W0425 20:04:12.648011   71966 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:04:12.648045   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.648393   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.648429   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.660989   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41421
	I0425 20:04:12.661534   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.662561   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.662592   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.662614   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0425 20:04:12.662804   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0425 20:04:12.662947   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663016   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663116   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.663173   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663515   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663547   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663585   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663604   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663882   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663920   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.664096   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.664487   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.664506   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.665031   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.667087   71966 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:04:12.668326   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:04:12.668343   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:04:12.668361   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.666460   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.669907   71966 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:04:09.373628   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:11.376301   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.671391   71966 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.671411   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:04:12.671427   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.671566   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672113   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.672132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672233   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.672353   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.672439   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.672525   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.674511   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.674926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.674951   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.675178   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.675357   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.675505   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.675662   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.683720   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0425 20:04:12.684195   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.684736   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.684755   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.685100   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.685282   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.687009   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.687257   71966 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.687277   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:04:12.687325   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.689958   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690356   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.690374   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690446   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.690655   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.690841   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.690989   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.846840   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:12.865045   71966 node_ready.go:35] waiting up to 6m0s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:12.938848   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:04:12.938875   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:04:12.941038   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.959316   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.977813   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:04:12.977841   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:04:13.050586   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:13.050610   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:04:13.111207   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:14.253195   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.31212607s)
	I0425 20:04:14.253252   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253247   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.293897647s)
	I0425 20:04:14.253268   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253303   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253371   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253625   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253641   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253650   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253656   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253677   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253690   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253699   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253711   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253876   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254099   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253911   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253949   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253977   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254193   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.260565   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.260584   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.260830   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.260850   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.342979   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.231720554s)
	I0425 20:04:14.343042   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343349   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.343358   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343374   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343390   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343398   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343602   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343623   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343633   71966 addons.go:470] Verifying addon metrics-server=true in "embed-certs-512173"
	I0425 20:04:14.346631   71966 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0425 20:04:14.347936   71966 addons.go:505] duration metric: took 1.724328435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0425 20:04:14.869074   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.383960   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:13.384840   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.883656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.707816   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.207561   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.708264   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.207822   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.707509   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.207507   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.707899   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.208254   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.708246   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:17.207508   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.873212   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.873263   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:18.373183   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:16.870001   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:18.368960   71966 node_ready.go:49] node "embed-certs-512173" has status "Ready":"True"
	I0425 20:04:18.368991   71966 node_ready.go:38] duration metric: took 5.503919958s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:18.369003   71966 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:18.375440   71966 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380902   71966 pod_ready.go:92] pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.380920   71966 pod_ready.go:81] duration metric: took 5.456921ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380928   71966 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386330   71966 pod_ready.go:92] pod "etcd-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.386386   71966 pod_ready.go:81] duration metric: took 5.451019ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386402   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391115   71966 pod_ready.go:92] pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.391138   71966 pod_ready.go:81] duration metric: took 4.727835ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391149   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:20.398316   71966 pod_ready.go:102] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.885191   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:20.384439   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.707948   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.207953   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.707659   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.207609   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.707567   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.207989   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.707938   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.208305   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.707827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:22.207940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.374376   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.873180   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.899221   71966 pod_ready.go:92] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.899240   71966 pod_ready.go:81] duration metric: took 4.508083804s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.899250   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904904   71966 pod_ready.go:92] pod "kube-proxy-8247p" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.904922   71966 pod_ready.go:81] duration metric: took 5.665557ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904929   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910035   71966 pod_ready.go:92] pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.910051   71966 pod_ready.go:81] duration metric: took 5.116298ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910059   71966 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:24.919233   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.884480   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:25.384287   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.707381   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.207532   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.707461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.208239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.707742   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.208365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.707323   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.207485   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.707727   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:27.208332   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.373538   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.872428   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.420297   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.918808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.385722   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.883321   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.707275   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.207776   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.708096   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.207685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.708249   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.207647   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.707943   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.207471   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.707902   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:32.207582   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.872576   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.372818   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.416593   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:34.416976   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:31.884120   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:33.885341   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:35.886190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.708066   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.208090   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.707474   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.207664   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.708110   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.208160   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.707940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.207505   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.708334   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:37.207939   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.375813   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.873166   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.417945   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.916796   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.384530   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.384673   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:37.707256   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.207621   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.708237   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.208327   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.707542   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.207371   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.708300   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.207577   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.708097   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:42.207684   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.876272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:41.372217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.918223   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:43.420086   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.389390   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:44.885243   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.708257   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.207407   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.707548   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:43.707618   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:43.753656   72712 cri.go:89] found id: ""
	I0425 20:04:43.753686   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.753698   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:43.753706   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:43.753770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:43.797957   72712 cri.go:89] found id: ""
	I0425 20:04:43.797982   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.797991   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:43.797996   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:43.798051   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:43.836700   72712 cri.go:89] found id: ""
	I0425 20:04:43.836729   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.836737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:43.836742   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:43.836795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:43.883452   72712 cri.go:89] found id: ""
	I0425 20:04:43.883478   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.883486   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:43.883492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:43.883544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:43.929975   72712 cri.go:89] found id: ""
	I0425 20:04:43.930004   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.930014   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:43.930022   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:43.930089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:43.967648   72712 cri.go:89] found id: ""
	I0425 20:04:43.967681   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.967693   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:43.967701   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:43.967758   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:44.011024   72712 cri.go:89] found id: ""
	I0425 20:04:44.011048   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.011072   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:44.011078   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:44.011129   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:44.050233   72712 cri.go:89] found id: ""
	I0425 20:04:44.050263   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.050274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:44.050286   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:44.050302   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:44.196275   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:44.196307   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:44.196323   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:44.260707   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:44.260748   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:44.306051   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:44.306090   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:44.357643   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:44.357682   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:46.875982   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:46.890987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:46.891062   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:46.935855   72712 cri.go:89] found id: ""
	I0425 20:04:46.935878   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.935885   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:46.935891   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:46.935948   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:46.978634   72712 cri.go:89] found id: ""
	I0425 20:04:46.978662   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.978674   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:46.978681   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:46.978749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:47.019845   72712 cri.go:89] found id: ""
	I0425 20:04:47.019864   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.019872   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:47.019877   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:47.019933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:47.065002   72712 cri.go:89] found id: ""
	I0425 20:04:47.065040   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.065064   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:47.065072   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:47.065139   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:47.106370   72712 cri.go:89] found id: ""
	I0425 20:04:47.106404   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.106416   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:47.106423   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:47.106483   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:47.143851   72712 cri.go:89] found id: ""
	I0425 20:04:47.143874   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.143883   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:47.143888   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:47.143932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:47.186130   72712 cri.go:89] found id: ""
	I0425 20:04:47.186160   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.186168   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:47.186174   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:47.186238   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:47.228959   72712 cri.go:89] found id: ""
	I0425 20:04:47.228984   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.228992   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:47.229000   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:47.229010   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:47.299852   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:47.299893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:47.346078   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:47.346111   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:43.872670   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:46.373259   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:45.917948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.919494   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:50.420952   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.388353   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:49.884300   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.405897   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:47.405932   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:47.424426   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:47.424455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:47.506603   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.007697   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:50.023258   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:50.023333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:50.066794   72712 cri.go:89] found id: ""
	I0425 20:04:50.066827   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.066836   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:50.066842   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:50.066913   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:50.109167   72712 cri.go:89] found id: ""
	I0425 20:04:50.109200   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.109212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:50.109219   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:50.109306   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:50.151854   72712 cri.go:89] found id: ""
	I0425 20:04:50.151878   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.151886   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:50.151892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:50.151940   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:50.190600   72712 cri.go:89] found id: ""
	I0425 20:04:50.190632   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.190644   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:50.190672   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:50.190742   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:50.232851   72712 cri.go:89] found id: ""
	I0425 20:04:50.232874   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.232883   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:50.232889   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:50.232935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:50.274941   72712 cri.go:89] found id: ""
	I0425 20:04:50.274971   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.274983   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:50.274990   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:50.275069   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:50.320954   72712 cri.go:89] found id: ""
	I0425 20:04:50.320981   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.320992   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:50.320999   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:50.321068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:50.361799   72712 cri.go:89] found id: ""
	I0425 20:04:50.361829   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.361839   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:50.361847   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:50.361858   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:50.457792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.457819   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:50.457834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:50.539653   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:50.539702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:50.598740   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:50.598774   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:50.650501   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:50.650533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:48.872490   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.374484   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:52.919420   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:55.420126   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.887536   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:54.389174   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:53.167827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:53.183324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:53.183403   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:53.227598   72712 cri.go:89] found id: ""
	I0425 20:04:53.227641   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.227650   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:53.227655   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:53.227700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:53.271170   72712 cri.go:89] found id: ""
	I0425 20:04:53.271200   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.271212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:53.271220   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:53.271304   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:53.318185   72712 cri.go:89] found id: ""
	I0425 20:04:53.318233   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.318246   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:53.318255   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:53.318324   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:53.372199   72712 cri.go:89] found id: ""
	I0425 20:04:53.372228   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.372238   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:53.372244   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:53.372367   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:53.414048   72712 cri.go:89] found id: ""
	I0425 20:04:53.414080   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.414091   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:53.414099   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:53.414170   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:53.455746   72712 cri.go:89] found id: ""
	I0425 20:04:53.455806   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.455819   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:53.455827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:53.455901   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:53.497969   72712 cri.go:89] found id: ""
	I0425 20:04:53.497996   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.498004   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:53.498011   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:53.498057   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:53.543642   72712 cri.go:89] found id: ""
	I0425 20:04:53.543668   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.543675   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:53.543684   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:53.543693   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:53.596106   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:53.596144   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:53.612755   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:53.612787   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:53.693068   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:53.693089   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:53.693102   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:53.771499   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:53.771535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:56.322663   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:56.336866   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:56.336945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:56.375515   72712 cri.go:89] found id: ""
	I0425 20:04:56.375556   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.375567   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:56.375574   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:56.375641   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:56.423230   72712 cri.go:89] found id: ""
	I0425 20:04:56.423261   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.423273   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:56.423281   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:56.423366   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:56.467786   72712 cri.go:89] found id: ""
	I0425 20:04:56.467814   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.467835   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:56.467842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:56.467895   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:56.517671   72712 cri.go:89] found id: ""
	I0425 20:04:56.517696   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.517708   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:56.517715   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:56.517770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:56.558622   72712 cri.go:89] found id: ""
	I0425 20:04:56.558651   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.558662   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:56.558669   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:56.558746   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:56.601350   72712 cri.go:89] found id: ""
	I0425 20:04:56.601374   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.601382   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:56.601387   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:56.601444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:56.645892   72712 cri.go:89] found id: ""
	I0425 20:04:56.645923   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.645934   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:56.645940   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:56.646001   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:56.691619   72712 cri.go:89] found id: ""
	I0425 20:04:56.691645   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.691656   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:56.691665   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:56.691679   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:56.744854   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:56.744891   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:56.762523   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:56.762556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:56.843396   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:56.843422   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:56.843437   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:56.933785   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:56.933825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:53.872514   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.372956   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:58.373649   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:57.917208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.920979   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.884907   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.385506   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.481512   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:59.497510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:59.497588   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:59.547382   72712 cri.go:89] found id: ""
	I0425 20:04:59.547412   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.547423   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:59.547432   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:59.547486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:59.597671   72712 cri.go:89] found id: ""
	I0425 20:04:59.597699   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.597711   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:59.597717   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:59.597762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:59.641455   72712 cri.go:89] found id: ""
	I0425 20:04:59.641486   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.641497   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:59.641503   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:59.641613   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:59.685052   72712 cri.go:89] found id: ""
	I0425 20:04:59.685092   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.685104   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:59.685112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:59.685173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:59.735912   72712 cri.go:89] found id: ""
	I0425 20:04:59.735943   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.735951   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:59.735957   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:59.736025   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:59.799294   72712 cri.go:89] found id: ""
	I0425 20:04:59.799322   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.799332   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:59.799338   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:59.799395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:59.871270   72712 cri.go:89] found id: ""
	I0425 20:04:59.871297   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.871308   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:59.871315   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:59.871377   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:59.919001   72712 cri.go:89] found id: ""
	I0425 20:04:59.919091   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.919110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:59.919120   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:59.919135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:59.973458   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:59.973498   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:59.989729   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:59.989757   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:00.072887   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:00.072911   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:00.072926   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:00.153886   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:00.153921   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:00.873812   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.372969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.417960   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:04.420353   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:01.885238   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.887277   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:02.722771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:02.722831   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:02.770101   72712 cri.go:89] found id: ""
	I0425 20:05:02.770134   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.770147   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:02.770154   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:02.770224   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:02.817819   72712 cri.go:89] found id: ""
	I0425 20:05:02.817854   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.817865   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:02.817898   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:02.817963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:02.857036   72712 cri.go:89] found id: ""
	I0425 20:05:02.857066   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.857077   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:02.857085   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:02.857144   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:02.900112   72712 cri.go:89] found id: ""
	I0425 20:05:02.900145   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.900157   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:02.900164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:02.900221   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:02.941079   72712 cri.go:89] found id: ""
	I0425 20:05:02.941109   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.941116   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:02.941121   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:02.941198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:02.983458   72712 cri.go:89] found id: ""
	I0425 20:05:02.983490   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.983502   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:02.983510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:02.983574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:03.025424   72712 cri.go:89] found id: ""
	I0425 20:05:03.025451   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.025462   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:03.025469   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:03.025556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:03.065285   72712 cri.go:89] found id: ""
	I0425 20:05:03.065316   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.065328   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:03.065340   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:03.065351   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:03.121235   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:03.121267   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:03.138036   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:03.138073   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:03.213604   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:03.213638   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:03.213655   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:03.296696   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:03.296741   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.842642   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:05.859125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:05.859199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:05.906505   72712 cri.go:89] found id: ""
	I0425 20:05:05.906529   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.906537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:05.906542   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:05.906595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:05.950793   72712 cri.go:89] found id: ""
	I0425 20:05:05.950819   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.950831   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:05.950838   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:05.950902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:05.991612   72712 cri.go:89] found id: ""
	I0425 20:05:05.991644   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.991654   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:05.991661   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:05.991755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:06.032273   72712 cri.go:89] found id: ""
	I0425 20:05:06.032314   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.032326   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:06.032334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:06.032392   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:06.071802   72712 cri.go:89] found id: ""
	I0425 20:05:06.071833   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.071844   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:06.071852   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:06.071908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:06.116676   72712 cri.go:89] found id: ""
	I0425 20:05:06.116702   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.116710   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:06.116716   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:06.116759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:06.154720   72712 cri.go:89] found id: ""
	I0425 20:05:06.154753   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.154765   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:06.154771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:06.154842   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:06.196421   72712 cri.go:89] found id: ""
	I0425 20:05:06.196457   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.196469   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:06.196480   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:06.196493   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:06.251061   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:06.251122   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:06.267764   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:06.267799   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:06.345302   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:06.345334   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:06.345349   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:06.427836   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:06.427868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.873928   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.372014   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.422386   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.916659   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.384700   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.883611   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:10.885814   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.989442   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:09.004493   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:09.004551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:09.056062   72712 cri.go:89] found id: ""
	I0425 20:05:09.056086   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.056096   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:09.056101   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:09.056148   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:09.096791   72712 cri.go:89] found id: ""
	I0425 20:05:09.096817   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.096827   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:09.096834   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:09.096889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:09.134649   72712 cri.go:89] found id: ""
	I0425 20:05:09.134680   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.134691   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:09.134698   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:09.134757   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:09.175980   72712 cri.go:89] found id: ""
	I0425 20:05:09.176010   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.176021   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:09.176028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:09.176084   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:09.216263   72712 cri.go:89] found id: ""
	I0425 20:05:09.216299   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.216313   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:09.216325   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:09.216395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:09.260498   72712 cri.go:89] found id: ""
	I0425 20:05:09.260528   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.260538   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:09.260544   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:09.260603   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:09.303154   72712 cri.go:89] found id: ""
	I0425 20:05:09.303178   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.303201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:09.303209   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:09.303269   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:09.350798   72712 cri.go:89] found id: ""
	I0425 20:05:09.350829   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.350840   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:09.350852   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:09.350868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:09.405295   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:09.405332   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:09.422788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:09.422820   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:09.501819   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:09.501841   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:09.501855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:09.586938   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:09.586981   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:12.132731   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:12.148860   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:12.148935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:12.194021   72712 cri.go:89] found id: ""
	I0425 20:05:12.194051   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.194064   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:12.194072   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:12.194152   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:12.234680   72712 cri.go:89] found id: ""
	I0425 20:05:12.234710   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.234721   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:12.234728   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:12.234792   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:12.277751   72712 cri.go:89] found id: ""
	I0425 20:05:12.277783   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.277794   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:12.277802   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:12.277864   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:12.324068   72712 cri.go:89] found id: ""
	I0425 20:05:12.324100   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.324117   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:12.324125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:12.324187   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:10.374594   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.873217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:11.424208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.425980   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.387259   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.884337   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.366797   72712 cri.go:89] found id: ""
	I0425 20:05:12.366825   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.366837   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:12.366844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:12.366903   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:12.413092   72712 cri.go:89] found id: ""
	I0425 20:05:12.413120   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.413132   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:12.413139   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:12.413198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:12.461229   72712 cri.go:89] found id: ""
	I0425 20:05:12.461253   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.461262   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:12.461268   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:12.461333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:12.504646   72712 cri.go:89] found id: ""
	I0425 20:05:12.504669   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.504677   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:12.504685   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:12.504698   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:12.561630   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:12.561673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:12.578043   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:12.578069   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:12.655176   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:12.655195   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:12.655209   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:12.736323   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:12.736357   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.287503   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:15.302830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:15.302893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:15.339479   72712 cri.go:89] found id: ""
	I0425 20:05:15.339509   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.339519   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:15.339527   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:15.339589   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:15.381431   72712 cri.go:89] found id: ""
	I0425 20:05:15.381458   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.381467   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:15.381475   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:15.381537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:15.423729   72712 cri.go:89] found id: ""
	I0425 20:05:15.423755   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.423767   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:15.423774   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:15.423833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:15.464367   72712 cri.go:89] found id: ""
	I0425 20:05:15.464401   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.464413   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:15.464421   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:15.464489   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:15.508306   72712 cri.go:89] found id: ""
	I0425 20:05:15.508336   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.508348   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:15.508356   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:15.508419   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:15.548572   72712 cri.go:89] found id: ""
	I0425 20:05:15.548600   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.548610   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:15.548616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:15.548678   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:15.592885   72712 cri.go:89] found id: ""
	I0425 20:05:15.592914   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.592926   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:15.592933   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:15.592992   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:15.632817   72712 cri.go:89] found id: ""
	I0425 20:05:15.632855   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.632868   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:15.632880   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:15.632900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:15.648443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:15.648470   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:15.726167   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:15.726191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:15.726229   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:15.803028   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:15.803066   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.850519   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:15.850552   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:14.873291   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:17.372118   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.917932   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.420096   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.384555   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.885930   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.404671   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:18.422600   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:18.422663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:18.476977   72712 cri.go:89] found id: ""
	I0425 20:05:18.477001   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.477009   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:18.477021   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:18.477093   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:18.525595   72712 cri.go:89] found id: ""
	I0425 20:05:18.525631   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.525641   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:18.525648   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:18.525714   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:18.565485   72712 cri.go:89] found id: ""
	I0425 20:05:18.565513   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.565523   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:18.565531   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:18.565600   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:18.612059   72712 cri.go:89] found id: ""
	I0425 20:05:18.612096   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.612106   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:18.612112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:18.612173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:18.659407   72712 cri.go:89] found id: ""
	I0425 20:05:18.659438   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.659449   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:18.659456   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:18.659507   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:18.701065   72712 cri.go:89] found id: ""
	I0425 20:05:18.701092   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.701101   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:18.701106   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:18.701201   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:18.738234   72712 cri.go:89] found id: ""
	I0425 20:05:18.738264   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.738276   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:18.738284   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:18.738343   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:18.780460   72712 cri.go:89] found id: ""
	I0425 20:05:18.780489   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.780498   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:18.780514   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:18.780526   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:18.834345   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:18.834378   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:18.850006   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:18.850033   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:18.932146   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:18.932171   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:18.932185   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:19.015036   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:19.015068   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:21.568250   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:21.582519   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:21.582595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:21.622886   72712 cri.go:89] found id: ""
	I0425 20:05:21.622913   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.622920   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:21.622925   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:21.622974   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:21.664832   72712 cri.go:89] found id: ""
	I0425 20:05:21.664860   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.664874   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:21.664882   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:21.664950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:21.703801   72712 cri.go:89] found id: ""
	I0425 20:05:21.703829   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.703843   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:21.703850   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:21.703911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:21.741502   72712 cri.go:89] found id: ""
	I0425 20:05:21.741540   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.741549   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:21.741555   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:21.741612   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:21.783715   72712 cri.go:89] found id: ""
	I0425 20:05:21.783745   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.783754   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:21.783759   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:21.783803   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:21.822806   72712 cri.go:89] found id: ""
	I0425 20:05:21.822842   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.822851   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:21.822856   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:21.822915   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:21.864996   72712 cri.go:89] found id: ""
	I0425 20:05:21.865020   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.865030   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:21.865037   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:21.865092   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:21.907533   72712 cri.go:89] found id: ""
	I0425 20:05:21.907563   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.907575   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:21.907585   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:21.907601   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:21.964226   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:21.964260   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:21.980096   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:21.980123   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:22.059516   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:22.059539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:22.059566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:22.136752   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:22.136784   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:19.373290   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:21.873377   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.916720   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:22.917156   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.918191   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:23.384566   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:25.885793   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.682139   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:24.697495   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:24.697564   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:24.739725   72712 cri.go:89] found id: ""
	I0425 20:05:24.739750   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.739760   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:24.739766   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:24.739824   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:24.777455   72712 cri.go:89] found id: ""
	I0425 20:05:24.777485   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.777497   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:24.777504   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:24.777566   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:24.821729   72712 cri.go:89] found id: ""
	I0425 20:05:24.821761   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.821774   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:24.821782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:24.821845   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:24.861745   72712 cri.go:89] found id: ""
	I0425 20:05:24.861773   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.861784   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:24.861791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:24.861851   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:24.903441   72712 cri.go:89] found id: ""
	I0425 20:05:24.903470   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.903479   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:24.903486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:24.903544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:24.943589   72712 cri.go:89] found id: ""
	I0425 20:05:24.943618   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.943629   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:24.943637   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:24.943717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:24.983629   72712 cri.go:89] found id: ""
	I0425 20:05:24.983661   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.983672   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:24.983680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:24.983739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:25.022413   72712 cri.go:89] found id: ""
	I0425 20:05:25.022441   72712 logs.go:276] 0 containers: []
	W0425 20:05:25.022451   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:25.022462   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:25.022477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:25.077402   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:25.077438   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:25.094488   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:25.094517   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:25.171485   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:25.171515   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:25.171535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:25.251131   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:25.251166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:24.373762   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:26.873969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.420395   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:29.420994   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:28.384247   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:30.883795   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.797359   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:27.813601   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:27.813659   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:27.854017   72712 cri.go:89] found id: ""
	I0425 20:05:27.854051   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.854061   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:27.854066   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:27.854117   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:27.900425   72712 cri.go:89] found id: ""
	I0425 20:05:27.900451   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.900461   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:27.900468   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:27.900531   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:27.940064   72712 cri.go:89] found id: ""
	I0425 20:05:27.940096   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.940107   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:27.940114   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:27.940174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:27.979363   72712 cri.go:89] found id: ""
	I0425 20:05:27.979385   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.979393   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:27.979399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:27.979442   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:28.019702   72712 cri.go:89] found id: ""
	I0425 20:05:28.019723   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.019731   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:28.019736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:28.019798   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:28.058711   72712 cri.go:89] found id: ""
	I0425 20:05:28.058740   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.058748   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:28.058755   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:28.058810   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:28.104465   72712 cri.go:89] found id: ""
	I0425 20:05:28.104495   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.104507   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:28.104515   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:28.104577   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:28.142399   72712 cri.go:89] found id: ""
	I0425 20:05:28.142431   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.142440   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:28.142449   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:28.142460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:28.222763   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:28.222786   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:28.222801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:28.299797   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:28.299838   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:28.366569   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:28.366594   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:28.424581   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:28.424628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:30.942526   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:30.957400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:30.957482   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:30.996931   72712 cri.go:89] found id: ""
	I0425 20:05:30.996958   72712 logs.go:276] 0 containers: []
	W0425 20:05:30.996967   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:30.996974   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:30.997029   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:31.035673   72712 cri.go:89] found id: ""
	I0425 20:05:31.035700   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.035712   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:31.035719   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:31.035782   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:31.075783   72712 cri.go:89] found id: ""
	I0425 20:05:31.075809   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.075820   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:31.075826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:31.075886   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:31.114229   72712 cri.go:89] found id: ""
	I0425 20:05:31.114257   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.114267   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:31.114274   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:31.114333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:31.155385   72712 cri.go:89] found id: ""
	I0425 20:05:31.155409   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.155419   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:31.155427   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:31.155486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:31.193772   72712 cri.go:89] found id: ""
	I0425 20:05:31.193804   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.193815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:31.193823   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:31.193878   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:31.233886   72712 cri.go:89] found id: ""
	I0425 20:05:31.233909   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.233917   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:31.233923   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:31.233967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:31.273427   72712 cri.go:89] found id: ""
	I0425 20:05:31.273455   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.273465   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:31.273476   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:31.273491   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:31.354429   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:31.354462   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:31.406018   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:31.406047   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:31.460972   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:31.461007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:31.477485   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:31.477513   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:31.551616   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:29.371357   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.373007   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.421948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.424866   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.384577   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.884780   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:34.052808   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:34.068068   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:34.068158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:34.120984   72712 cri.go:89] found id: ""
	I0425 20:05:34.121016   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.121024   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:34.121032   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:34.121082   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:34.160646   72712 cri.go:89] found id: ""
	I0425 20:05:34.160676   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.160687   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:34.160694   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:34.160752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:34.202641   72712 cri.go:89] found id: ""
	I0425 20:05:34.202665   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.202671   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:34.202677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:34.202733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:34.244352   72712 cri.go:89] found id: ""
	I0425 20:05:34.244379   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.244391   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:34.244400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:34.244460   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:34.285858   72712 cri.go:89] found id: ""
	I0425 20:05:34.285885   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.285896   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:34.285904   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:34.285956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:34.323634   72712 cri.go:89] found id: ""
	I0425 20:05:34.323662   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.323673   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:34.323681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:34.323739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:34.365230   72712 cri.go:89] found id: ""
	I0425 20:05:34.365256   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.365272   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:34.365280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:34.365339   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:34.409329   72712 cri.go:89] found id: ""
	I0425 20:05:34.409354   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.409365   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:34.409376   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:34.409390   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:34.464575   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:34.464606   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:34.480244   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:34.480270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:34.560204   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:34.560224   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:34.560236   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:34.640152   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:34.640187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:37.189992   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:37.204683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:37.204786   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:37.245857   72712 cri.go:89] found id: ""
	I0425 20:05:37.245891   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.245903   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:37.245910   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:37.245969   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:37.284668   72712 cri.go:89] found id: ""
	I0425 20:05:37.284696   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.284704   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:37.284710   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:37.284762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:37.324349   72712 cri.go:89] found id: ""
	I0425 20:05:37.324379   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.324391   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:37.324399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:37.324461   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:33.872836   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.873214   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.373278   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.917308   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.419746   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.383933   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.385166   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:37.361764   72712 cri.go:89] found id: ""
	I0425 20:05:37.361787   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.361800   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:37.361811   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:37.361857   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:37.404331   72712 cri.go:89] found id: ""
	I0425 20:05:37.404353   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.404360   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:37.404366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:37.404430   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:37.445284   72712 cri.go:89] found id: ""
	I0425 20:05:37.445316   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.445327   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:37.445334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:37.445395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:37.483806   72712 cri.go:89] found id: ""
	I0425 20:05:37.483828   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.483837   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:37.483843   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:37.483888   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:37.524649   72712 cri.go:89] found id: ""
	I0425 20:05:37.524673   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.524680   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:37.524689   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:37.524701   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:37.581521   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:37.581553   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:37.598459   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:37.598487   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:37.671236   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:37.671256   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:37.671272   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:37.750517   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:37.750556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.293743   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:40.310344   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:40.310426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:40.356157   72712 cri.go:89] found id: ""
	I0425 20:05:40.356198   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.356208   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:40.356215   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:40.356277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:40.397857   72712 cri.go:89] found id: ""
	I0425 20:05:40.397886   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.397895   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:40.397902   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:40.397964   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:40.445034   72712 cri.go:89] found id: ""
	I0425 20:05:40.445057   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.445065   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:40.445071   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:40.445126   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:40.493744   72712 cri.go:89] found id: ""
	I0425 20:05:40.493773   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.493783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:40.493797   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:40.493856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:40.550546   72712 cri.go:89] found id: ""
	I0425 20:05:40.550572   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.550580   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:40.550587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:40.550654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:40.605122   72712 cri.go:89] found id: ""
	I0425 20:05:40.605153   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.605164   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:40.605172   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:40.605232   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:40.675713   72712 cri.go:89] found id: ""
	I0425 20:05:40.675745   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.675755   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:40.675769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:40.675828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:40.716064   72712 cri.go:89] found id: ""
	I0425 20:05:40.716093   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.716101   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:40.716109   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:40.716120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:40.781395   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:40.781441   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:40.797597   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:40.797628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:40.880931   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:40.880956   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:40.880971   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:40.970770   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:40.970800   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.373398   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.873163   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.918560   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.417610   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:45.420963   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.883556   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:44.883719   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.520389   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:43.537668   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:43.537729   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:43.578137   72712 cri.go:89] found id: ""
	I0425 20:05:43.578166   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.578175   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:43.578180   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:43.578247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:43.617428   72712 cri.go:89] found id: ""
	I0425 20:05:43.617454   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.617462   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:43.617466   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:43.617519   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:43.655401   72712 cri.go:89] found id: ""
	I0425 20:05:43.655431   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.655443   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:43.655450   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:43.655514   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:43.695183   72712 cri.go:89] found id: ""
	I0425 20:05:43.695212   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.695230   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:43.695238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:43.695316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:43.735056   72712 cri.go:89] found id: ""
	I0425 20:05:43.735086   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.735098   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:43.735104   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:43.735162   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:43.774761   72712 cri.go:89] found id: ""
	I0425 20:05:43.774789   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.774799   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:43.774830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:43.774889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:43.819102   72712 cri.go:89] found id: ""
	I0425 20:05:43.819128   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.819138   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:43.819146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:43.819206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:43.858235   72712 cri.go:89] found id: ""
	I0425 20:05:43.858267   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.858278   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:43.858289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:43.858303   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:43.940756   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:43.940794   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:43.985878   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:43.985925   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:44.040177   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:44.040207   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:44.055912   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:44.055942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:44.143724   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:46.643923   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:46.658863   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:46.658941   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:46.697826   72712 cri.go:89] found id: ""
	I0425 20:05:46.697850   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.697858   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:46.697884   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:46.697947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:46.739850   72712 cri.go:89] found id: ""
	I0425 20:05:46.739877   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.739888   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:46.739897   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:46.739955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:46.781212   72712 cri.go:89] found id: ""
	I0425 20:05:46.781241   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.781256   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:46.781262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:46.781321   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:46.826005   72712 cri.go:89] found id: ""
	I0425 20:05:46.826036   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.826047   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:46.826055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:46.826109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:46.865428   72712 cri.go:89] found id: ""
	I0425 20:05:46.865456   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.865465   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:46.865472   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:46.865522   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:46.914860   72712 cri.go:89] found id: ""
	I0425 20:05:46.914887   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.914897   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:46.914907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:46.914968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:46.955323   72712 cri.go:89] found id: ""
	I0425 20:05:46.955355   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.955365   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:46.955373   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:46.955436   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:46.999369   72712 cri.go:89] found id: ""
	I0425 20:05:46.999396   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.999408   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:46.999419   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:46.999464   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:47.013865   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:47.013893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:47.094725   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:47.094755   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:47.094771   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:47.178380   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:47.178426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:47.227217   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:47.227249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:45.375272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.872640   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.917579   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.918001   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:46.884746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:48.884818   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.780217   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:49.795690   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:49.795760   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:49.834909   72712 cri.go:89] found id: ""
	I0425 20:05:49.834935   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.834943   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:49.834951   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:49.835004   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:49.872717   72712 cri.go:89] found id: ""
	I0425 20:05:49.872747   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.872755   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:49.872762   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:49.872807   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:49.919348   72712 cri.go:89] found id: ""
	I0425 20:05:49.919376   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.919387   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:49.919395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:49.919465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:49.959673   72712 cri.go:89] found id: ""
	I0425 20:05:49.959705   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.959716   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:49.959728   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:49.959796   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:49.999276   72712 cri.go:89] found id: ""
	I0425 20:05:49.999299   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.999306   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:49.999312   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:49.999361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:50.037426   72712 cri.go:89] found id: ""
	I0425 20:05:50.037454   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.037461   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:50.037466   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:50.037510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:50.080666   72712 cri.go:89] found id: ""
	I0425 20:05:50.080695   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.080703   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:50.080719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:50.080776   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:50.126065   72712 cri.go:89] found id: ""
	I0425 20:05:50.126111   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.126123   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:50.126134   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:50.126148   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:50.140778   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:50.140805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:50.213282   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:50.213308   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:50.213320   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:50.293798   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:50.293832   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:50.336823   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:50.336859   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:49.873685   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.372830   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.919781   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:54.417518   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.382698   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:53.392894   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:55.884231   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.892579   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:52.909556   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:52.909629   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:52.948098   72712 cri.go:89] found id: ""
	I0425 20:05:52.948127   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.948138   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:52.948146   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:52.948206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:52.988813   72712 cri.go:89] found id: ""
	I0425 20:05:52.988840   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.988848   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:52.988853   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:52.988898   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:53.032181   72712 cri.go:89] found id: ""
	I0425 20:05:53.032211   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.032222   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:53.032230   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:53.032288   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:53.075496   72712 cri.go:89] found id: ""
	I0425 20:05:53.075528   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.075538   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:53.075543   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:53.075599   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:53.119037   72712 cri.go:89] found id: ""
	I0425 20:05:53.119070   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.119082   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:53.119095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:53.119158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:53.158276   72712 cri.go:89] found id: ""
	I0425 20:05:53.158303   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.158314   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:53.158321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:53.158381   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:53.196168   72712 cri.go:89] found id: ""
	I0425 20:05:53.196199   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.196211   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:53.196219   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:53.196277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:53.235212   72712 cri.go:89] found id: ""
	I0425 20:05:53.235235   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.235243   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:53.235250   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:53.235261   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:53.290435   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:53.290474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:53.306351   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:53.306380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:53.388623   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:53.388652   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:53.388666   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:53.480388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:53.480426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:56.027403   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:56.042683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:56.042755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:56.083672   72712 cri.go:89] found id: ""
	I0425 20:05:56.083706   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.083718   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:56.083725   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:56.083790   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:56.124071   72712 cri.go:89] found id: ""
	I0425 20:05:56.124105   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.124126   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:56.124134   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:56.124200   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:56.166692   72712 cri.go:89] found id: ""
	I0425 20:05:56.166724   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.166737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:56.166744   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:56.166808   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:56.203833   72712 cri.go:89] found id: ""
	I0425 20:05:56.203871   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.203884   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:56.203892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:56.203950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:56.242277   72712 cri.go:89] found id: ""
	I0425 20:05:56.242319   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.242341   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:56.242349   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:56.242416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:56.281697   72712 cri.go:89] found id: ""
	I0425 20:05:56.281726   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.281733   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:56.281739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:56.281812   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:56.322190   72712 cri.go:89] found id: ""
	I0425 20:05:56.322233   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.322243   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:56.322248   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:56.322310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:56.364831   72712 cri.go:89] found id: ""
	I0425 20:05:56.364853   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.364864   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:56.364875   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:56.364889   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:56.422824   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:56.422856   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:56.437619   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:56.437641   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:56.512938   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:56.512961   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:56.512977   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:56.598670   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:56.598708   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:54.872566   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.873184   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.917352   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.421645   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:58.383740   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:00.384113   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.150322   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:59.166883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:59.166956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:59.205086   72712 cri.go:89] found id: ""
	I0425 20:05:59.205112   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.205121   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:59.205126   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:59.205199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:59.253430   72712 cri.go:89] found id: ""
	I0425 20:05:59.253458   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.253469   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:59.253478   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:59.253539   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:59.293691   72712 cri.go:89] found id: ""
	I0425 20:05:59.293719   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.293731   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:59.293738   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:59.293801   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:59.331580   72712 cri.go:89] found id: ""
	I0425 20:05:59.331604   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.331613   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:59.331619   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:59.331663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:59.369985   72712 cri.go:89] found id: ""
	I0425 20:05:59.370012   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.370023   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:59.370031   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:59.370095   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:59.411636   72712 cri.go:89] found id: ""
	I0425 20:05:59.411662   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.411670   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:59.411676   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:59.411733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:59.454735   72712 cri.go:89] found id: ""
	I0425 20:05:59.454762   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.454774   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:59.454782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:59.454839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:59.497664   72712 cri.go:89] found id: ""
	I0425 20:05:59.497694   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.497704   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:59.497715   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:59.497731   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:59.556694   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:59.556728   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:59.572160   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:59.572187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:59.649040   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:59.649063   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:59.649083   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:59.727941   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:59.727975   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.275513   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:02.290486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:02.290557   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:02.332217   72712 cri.go:89] found id: ""
	I0425 20:06:02.332255   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.332273   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:02.332281   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:02.332357   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:58.873314   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.373601   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.916947   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.418479   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.384744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.885488   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.373346   72712 cri.go:89] found id: ""
	I0425 20:06:02.373370   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.373377   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:02.373382   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:02.373439   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:02.415835   72712 cri.go:89] found id: ""
	I0425 20:06:02.415861   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.415873   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:02.415881   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:02.415939   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:02.458876   72712 cri.go:89] found id: ""
	I0425 20:06:02.458905   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.458917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:02.458926   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:02.459008   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:02.502092   72712 cri.go:89] found id: ""
	I0425 20:06:02.502127   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.502138   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:02.502146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:02.502235   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:02.546357   72712 cri.go:89] found id: ""
	I0425 20:06:02.546383   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.546393   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:02.546399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:02.546459   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:02.586842   72712 cri.go:89] found id: ""
	I0425 20:06:02.586870   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.586881   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:02.586887   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:02.586932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:02.629305   72712 cri.go:89] found id: ""
	I0425 20:06:02.629339   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.629350   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:02.629360   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:02.629374   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.676583   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:02.676626   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:02.731790   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:02.731825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:02.747473   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:02.747499   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:02.824265   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:02.824289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:02.824304   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.408968   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:05.423645   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:05.423713   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:05.467402   72712 cri.go:89] found id: ""
	I0425 20:06:05.467425   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.467434   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:05.467445   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:05.467510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:05.503131   72712 cri.go:89] found id: ""
	I0425 20:06:05.503153   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.503161   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:05.503166   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:05.503216   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:05.545694   72712 cri.go:89] found id: ""
	I0425 20:06:05.545721   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.545732   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:05.545739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:05.545804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:05.585879   72712 cri.go:89] found id: ""
	I0425 20:06:05.585905   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.585912   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:05.585917   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:05.585963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:05.625520   72712 cri.go:89] found id: ""
	I0425 20:06:05.625549   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.625560   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:05.625567   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:05.625620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:05.664306   72712 cri.go:89] found id: ""
	I0425 20:06:05.664335   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.664345   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:05.664364   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:05.664437   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:05.705353   72712 cri.go:89] found id: ""
	I0425 20:06:05.705385   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.705397   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:05.705405   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:05.705468   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:05.743935   72712 cri.go:89] found id: ""
	I0425 20:06:05.743968   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.743977   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:05.743986   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:05.743997   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:05.801190   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:05.801234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:05.817046   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:05.817074   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:05.899413   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:05.899443   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:05.899458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.986303   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:05.986336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:03.872605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:05.876833   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.373392   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.916334   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.917480   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.887784   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:09.387085   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.531748   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:08.550667   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:08.550749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:08.594062   72712 cri.go:89] found id: ""
	I0425 20:06:08.594093   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.594102   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:08.594108   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:08.594163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:08.635823   72712 cri.go:89] found id: ""
	I0425 20:06:08.635861   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.635872   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:08.635880   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:08.635944   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:08.675338   72712 cri.go:89] found id: ""
	I0425 20:06:08.675383   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.675395   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:08.675402   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:08.675463   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:08.715971   72712 cri.go:89] found id: ""
	I0425 20:06:08.716001   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.716012   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:08.716019   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:08.716088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:08.758565   72712 cri.go:89] found id: ""
	I0425 20:06:08.758597   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.758608   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:08.758616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:08.758683   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:08.800179   72712 cri.go:89] found id: ""
	I0425 20:06:08.800207   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.800218   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:08.800226   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:08.800286   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:08.854603   72712 cri.go:89] found id: ""
	I0425 20:06:08.854639   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.854651   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:08.854659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:08.854724   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:08.904115   72712 cri.go:89] found id: ""
	I0425 20:06:08.904141   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.904152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:08.904162   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:08.904177   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:08.921826   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:08.921855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:09.003667   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:09.003687   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:09.003699   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:09.086301   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:09.086346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:09.138478   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:09.138516   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:11.704402   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:11.721810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:11.721902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:11.768790   72712 cri.go:89] found id: ""
	I0425 20:06:11.768829   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.768850   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:11.768858   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:11.768928   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:11.813543   72712 cri.go:89] found id: ""
	I0425 20:06:11.813576   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.813588   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:11.813595   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:11.813654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:11.853930   72712 cri.go:89] found id: ""
	I0425 20:06:11.853962   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.853972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:11.853980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:11.854044   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:11.900808   72712 cri.go:89] found id: ""
	I0425 20:06:11.900843   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.900853   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:11.900861   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:11.900919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:11.948850   72712 cri.go:89] found id: ""
	I0425 20:06:11.948876   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.948885   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:11.948890   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:11.948945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:11.989326   72712 cri.go:89] found id: ""
	I0425 20:06:11.989356   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.989365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:11.989371   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:11.989450   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:12.033912   72712 cri.go:89] found id: ""
	I0425 20:06:12.033943   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.033954   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:12.033959   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:12.034015   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:12.076170   72712 cri.go:89] found id: ""
	I0425 20:06:12.076199   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.076209   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:12.076217   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:12.076230   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:12.124851   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:12.124881   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:12.178927   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:12.178964   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:12.194925   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:12.194952   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:12.272163   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:12.272187   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:12.272202   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:10.374908   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.871613   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:10.917911   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.918144   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:15.419043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:11.886066   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.383880   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.851400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:14.869893   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:14.869967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:14.915793   72712 cri.go:89] found id: ""
	I0425 20:06:14.915820   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.915829   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:14.915836   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:14.915896   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:14.959549   72712 cri.go:89] found id: ""
	I0425 20:06:14.959576   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.959587   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:14.959606   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:14.959672   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:15.001420   72712 cri.go:89] found id: ""
	I0425 20:06:15.001453   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.001465   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:15.001474   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:15.001552   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:15.047960   72712 cri.go:89] found id: ""
	I0425 20:06:15.047988   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.047996   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:15.048001   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:15.048049   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:15.096688   72712 cri.go:89] found id: ""
	I0425 20:06:15.096722   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.096730   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:15.096736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:15.096795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:15.142673   72712 cri.go:89] found id: ""
	I0425 20:06:15.142701   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.142712   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:15.142719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:15.142784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:15.181729   72712 cri.go:89] found id: ""
	I0425 20:06:15.181757   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.181766   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:15.181773   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:15.181820   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:15.227858   72712 cri.go:89] found id: ""
	I0425 20:06:15.227886   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.227897   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:15.227905   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:15.227917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:15.283253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:15.283293   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:15.305572   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:15.305604   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:15.439587   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:15.439615   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:15.439631   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:15.525678   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:15.525714   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:14.872914   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.873605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:17.420065   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:19.917501   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.383915   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:18.883746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:20.884190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:18.078788   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:18.095012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:18.095083   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:18.136753   72712 cri.go:89] found id: ""
	I0425 20:06:18.136784   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.136796   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:18.136802   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:18.136850   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:18.184584   72712 cri.go:89] found id: ""
	I0425 20:06:18.184606   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.184614   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:18.184619   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:18.184691   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:18.228201   72712 cri.go:89] found id: ""
	I0425 20:06:18.228250   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.228263   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:18.228270   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:18.228326   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:18.267756   72712 cri.go:89] found id: ""
	I0425 20:06:18.267778   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.267786   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:18.267792   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:18.267855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:18.309727   72712 cri.go:89] found id: ""
	I0425 20:06:18.309755   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.309763   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:18.309769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:18.309827   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:18.350549   72712 cri.go:89] found id: ""
	I0425 20:06:18.350580   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.350592   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:18.350599   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:18.350656   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:18.393868   72712 cri.go:89] found id: ""
	I0425 20:06:18.393891   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.393902   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:18.393910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:18.393989   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:18.435163   72712 cri.go:89] found id: ""
	I0425 20:06:18.435195   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.435204   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:18.435211   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:18.435224   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:18.450871   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:18.450901   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:18.534501   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:18.534526   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:18.534538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:18.616979   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:18.617015   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:18.663568   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:18.663598   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.217744   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:21.235862   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:21.235955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:21.288966   72712 cri.go:89] found id: ""
	I0425 20:06:21.288996   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.289005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:21.289014   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:21.289075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:21.362068   72712 cri.go:89] found id: ""
	I0425 20:06:21.362092   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.362101   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:21.362108   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:21.362168   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:21.416870   72712 cri.go:89] found id: ""
	I0425 20:06:21.416894   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.416901   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:21.416907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:21.416956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:21.461465   72712 cri.go:89] found id: ""
	I0425 20:06:21.461495   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.461503   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:21.461508   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:21.461570   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:21.499985   72712 cri.go:89] found id: ""
	I0425 20:06:21.500014   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.500025   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:21.500032   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:21.500081   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:21.543725   72712 cri.go:89] found id: ""
	I0425 20:06:21.543764   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.543776   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:21.543784   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:21.543841   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:21.586535   72712 cri.go:89] found id: ""
	I0425 20:06:21.586566   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.586578   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:21.586587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:21.586644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:21.627885   72712 cri.go:89] found id: ""
	I0425 20:06:21.627912   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.627921   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:21.627929   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:21.627942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.685973   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:21.686006   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:21.702529   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:21.702556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:21.781634   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:21.781660   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:21.781673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:21.862986   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:21.863027   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:19.372142   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.374479   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.918699   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.419088   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:23.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:25.883438   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.413547   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:24.428247   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:24.428323   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:24.468708   72712 cri.go:89] found id: ""
	I0425 20:06:24.468757   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.468768   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:24.468775   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:24.468836   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:24.507667   72712 cri.go:89] found id: ""
	I0425 20:06:24.507694   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.507702   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:24.507708   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:24.507769   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:24.548537   72712 cri.go:89] found id: ""
	I0425 20:06:24.548562   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.548570   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:24.548576   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:24.548625   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:24.591240   72712 cri.go:89] found id: ""
	I0425 20:06:24.591264   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.591272   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:24.591280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:24.591325   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:24.631530   72712 cri.go:89] found id: ""
	I0425 20:06:24.631557   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.631568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:24.631575   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:24.631642   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:24.672878   72712 cri.go:89] found id: ""
	I0425 20:06:24.672903   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.672911   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:24.672916   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:24.672960   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:24.716168   72712 cri.go:89] found id: ""
	I0425 20:06:24.716193   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.716201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:24.716206   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:24.716256   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:24.758061   72712 cri.go:89] found id: ""
	I0425 20:06:24.758098   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.758110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:24.758122   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:24.758135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:24.839866   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:24.839900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:24.889288   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:24.889380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:24.946445   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:24.946488   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:24.963093   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:24.963126   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:25.044921   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:23.874297   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.372055   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.375436   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.916503   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.916669   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.887709   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.384645   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.545838   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:27.562659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:27.562717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:27.606462   72712 cri.go:89] found id: ""
	I0425 20:06:27.606491   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.606501   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:27.606509   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:27.606567   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:27.650475   72712 cri.go:89] found id: ""
	I0425 20:06:27.650505   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.650517   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:27.650524   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:27.650583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:27.695163   72712 cri.go:89] found id: ""
	I0425 20:06:27.695190   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.695201   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:27.695208   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:27.695265   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:27.741798   72712 cri.go:89] found id: ""
	I0425 20:06:27.741832   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.741842   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:27.741849   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:27.741904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:27.784146   72712 cri.go:89] found id: ""
	I0425 20:06:27.784175   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.784185   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:27.784193   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:27.784253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:27.827179   72712 cri.go:89] found id: ""
	I0425 20:06:27.827213   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.827225   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:27.827234   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:27.827298   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:27.872941   72712 cri.go:89] found id: ""
	I0425 20:06:27.872961   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.872980   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:27.872985   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:27.873040   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:27.917920   72712 cri.go:89] found id: ""
	I0425 20:06:27.917949   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.917959   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:27.917970   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:27.917985   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:27.971411   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:27.971455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:27.988704   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:27.988743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:28.064208   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:28.064229   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:28.064242   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:28.147388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:28.147427   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.694349   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:30.708595   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:30.708671   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:30.752963   72712 cri.go:89] found id: ""
	I0425 20:06:30.752994   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.753005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:30.753012   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:30.753073   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:30.795453   72712 cri.go:89] found id: ""
	I0425 20:06:30.795488   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.795498   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:30.795507   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:30.795574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:30.838945   72712 cri.go:89] found id: ""
	I0425 20:06:30.838970   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.838978   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:30.838984   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:30.839042   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:30.886128   72712 cri.go:89] found id: ""
	I0425 20:06:30.886160   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.886170   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:30.886178   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:30.886255   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:30.927773   72712 cri.go:89] found id: ""
	I0425 20:06:30.927805   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.927819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:30.927827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:30.927893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:30.968628   72712 cri.go:89] found id: ""
	I0425 20:06:30.968660   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.968672   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:30.968680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:30.968743   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:31.014590   72712 cri.go:89] found id: ""
	I0425 20:06:31.014616   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.014627   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:31.014634   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:31.014697   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:31.053236   72712 cri.go:89] found id: ""
	I0425 20:06:31.053262   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.053274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:31.053285   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:31.053301   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:31.107797   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:31.107834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:31.123675   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:31.123702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:31.201180   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:31.201204   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:31.201215   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:31.289474   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:31.289512   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.873981   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.373083   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.918572   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.420043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:35.421384   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:32.883164   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:34.883697   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.840828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:33.857736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:33.857795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:33.898621   72712 cri.go:89] found id: ""
	I0425 20:06:33.898647   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.898658   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:33.898665   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:33.898727   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:33.939211   72712 cri.go:89] found id: ""
	I0425 20:06:33.939234   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.939245   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:33.939250   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:33.939305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:33.981872   72712 cri.go:89] found id: ""
	I0425 20:06:33.981896   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.981903   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:33.981909   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:33.981965   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:34.027570   72712 cri.go:89] found id: ""
	I0425 20:06:34.027597   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.027609   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:34.027617   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:34.027675   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:34.072544   72712 cri.go:89] found id: ""
	I0425 20:06:34.072570   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.072586   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:34.072594   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:34.072674   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:34.119326   72712 cri.go:89] found id: ""
	I0425 20:06:34.119349   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.119358   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:34.119366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:34.119423   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:34.169618   72712 cri.go:89] found id: ""
	I0425 20:06:34.169642   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.169650   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:34.169655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:34.169705   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:34.213570   72712 cri.go:89] found id: ""
	I0425 20:06:34.213593   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.213601   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:34.213609   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:34.213621   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:34.255722   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:34.255756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:34.311113   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:34.311147   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:34.326869   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:34.326897   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:34.399765   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:34.399788   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:34.399801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:36.986610   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:37.003090   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:37.003163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:37.045929   72712 cri.go:89] found id: ""
	I0425 20:06:37.045956   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.045964   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:37.045969   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:37.046022   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:37.086835   72712 cri.go:89] found id: ""
	I0425 20:06:37.086868   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.086879   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:37.086885   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:37.086937   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:37.127454   72712 cri.go:89] found id: ""
	I0425 20:06:37.127479   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.127488   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:37.127494   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:37.127551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:37.168878   72712 cri.go:89] found id: ""
	I0425 20:06:37.168904   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.168917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:37.168924   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:37.168986   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:37.208859   72712 cri.go:89] found id: ""
	I0425 20:06:37.208889   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.208901   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:37.208914   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:37.208970   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:37.250407   72712 cri.go:89] found id: ""
	I0425 20:06:37.250439   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.250452   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:37.250467   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:37.250536   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:37.291004   72712 cri.go:89] found id: ""
	I0425 20:06:37.291040   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.291054   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:37.291063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:37.291125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:37.335573   72712 cri.go:89] found id: ""
	I0425 20:06:37.335597   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.335608   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:37.335619   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:37.335635   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:35.873065   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.371805   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.426152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:39.916340   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:36.884518   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.884859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.392773   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:37.392810   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:37.408311   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:37.408343   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:37.491376   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:37.491402   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:37.491416   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:37.574559   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:37.574600   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.125241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:40.142254   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:40.142347   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:40.186859   72712 cri.go:89] found id: ""
	I0425 20:06:40.186893   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.186904   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:40.186911   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:40.186972   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:40.229247   72712 cri.go:89] found id: ""
	I0425 20:06:40.229275   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.229288   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:40.229295   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:40.229361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:40.268853   72712 cri.go:89] found id: ""
	I0425 20:06:40.268879   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.268890   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:40.268897   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:40.268959   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:40.307621   72712 cri.go:89] found id: ""
	I0425 20:06:40.307650   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.307669   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:40.307677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:40.307732   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:40.351448   72712 cri.go:89] found id: ""
	I0425 20:06:40.351472   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.351484   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:40.351492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:40.351548   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:40.396771   72712 cri.go:89] found id: ""
	I0425 20:06:40.396804   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.396815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:40.396824   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:40.396890   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:40.443605   72712 cri.go:89] found id: ""
	I0425 20:06:40.443634   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.443642   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:40.443647   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:40.443694   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:40.495496   72712 cri.go:89] found id: ""
	I0425 20:06:40.495525   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.495536   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:40.495548   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:40.495563   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.539428   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:40.539457   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:40.596259   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:40.596305   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:40.613140   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:40.613167   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:40.701768   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:40.701793   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:40.701805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:40.372225   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:42.373541   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.916879   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.917783   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.386292   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.885441   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.294502   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:43.310041   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:43.310113   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:43.351841   72712 cri.go:89] found id: ""
	I0425 20:06:43.351864   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.351872   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:43.351877   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:43.351924   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:43.395467   72712 cri.go:89] found id: ""
	I0425 20:06:43.395497   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.395509   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:43.395516   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:43.395576   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:43.437256   72712 cri.go:89] found id: ""
	I0425 20:06:43.437354   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.437375   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:43.437384   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:43.437465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:43.480744   72712 cri.go:89] found id: ""
	I0425 20:06:43.480772   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.480783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:43.480791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:43.480839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:43.519916   72712 cri.go:89] found id: ""
	I0425 20:06:43.519951   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.519961   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:43.519975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:43.520039   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:43.557861   72712 cri.go:89] found id: ""
	I0425 20:06:43.557890   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.557901   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:43.557910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:43.557968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:43.594423   72712 cri.go:89] found id: ""
	I0425 20:06:43.594449   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.594458   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:43.594464   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:43.594512   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:43.632227   72712 cri.go:89] found id: ""
	I0425 20:06:43.632253   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.632262   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:43.632270   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:43.632281   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:43.688307   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:43.688336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:43.703382   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:43.703407   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:43.782073   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:43.782093   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:43.782109   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:43.872811   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:43.872842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:46.420420   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:46.435110   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:46.435174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:46.474019   72712 cri.go:89] found id: ""
	I0425 20:06:46.474044   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.474054   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:46.474067   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:46.474125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:46.517053   72712 cri.go:89] found id: ""
	I0425 20:06:46.517078   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.517088   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:46.517096   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:46.517150   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:46.560934   72712 cri.go:89] found id: ""
	I0425 20:06:46.560963   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.560972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:46.560977   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:46.561030   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:46.605969   72712 cri.go:89] found id: ""
	I0425 20:06:46.605997   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.606007   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:46.606012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:46.606061   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:46.647025   72712 cri.go:89] found id: ""
	I0425 20:06:46.647049   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.647058   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:46.647063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:46.647118   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:46.686931   72712 cri.go:89] found id: ""
	I0425 20:06:46.686956   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.686966   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:46.686975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:46.687053   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:46.727183   72712 cri.go:89] found id: ""
	I0425 20:06:46.727207   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.727216   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:46.727224   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:46.727277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:46.768030   72712 cri.go:89] found id: ""
	I0425 20:06:46.768059   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.768073   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:46.768085   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:46.768105   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:46.823400   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:46.823439   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:46.838443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:46.838468   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:46.919509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:46.919527   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:46.919538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:46.996250   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:46.996284   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:44.873706   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.874042   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:45.918619   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.418507   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.384559   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.884184   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.885081   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:49.542696   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:49.557346   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:49.557444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:49.595195   72712 cri.go:89] found id: ""
	I0425 20:06:49.595220   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.595231   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:49.595238   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:49.595305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:49.641324   72712 cri.go:89] found id: ""
	I0425 20:06:49.641354   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.641365   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:49.641373   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:49.641426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:49.681510   72712 cri.go:89] found id: ""
	I0425 20:06:49.681540   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.681552   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:49.681559   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:49.681620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:49.721482   72712 cri.go:89] found id: ""
	I0425 20:06:49.721509   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.721518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:49.721525   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:49.721581   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:49.762682   72712 cri.go:89] found id: ""
	I0425 20:06:49.762710   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.762723   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:49.762731   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:49.762793   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:49.801892   72712 cri.go:89] found id: ""
	I0425 20:06:49.801920   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.801932   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:49.801943   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:49.802002   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:49.840347   72712 cri.go:89] found id: ""
	I0425 20:06:49.840376   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.840387   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:49.840395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:49.840458   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:49.898486   72712 cri.go:89] found id: ""
	I0425 20:06:49.898516   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.898527   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:49.898536   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:49.898547   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:49.952735   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:49.952775   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:49.967986   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:49.968018   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:50.048003   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:50.048024   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:50.048040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:50.126062   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:50.126098   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:49.373031   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:51.873671   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.917641   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.418642   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.421542   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.384273   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.384393   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:52.679721   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:52.695636   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:52.695700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:52.738329   72712 cri.go:89] found id: ""
	I0425 20:06:52.738359   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.738368   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:52.738374   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:52.738420   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:52.779388   72712 cri.go:89] found id: ""
	I0425 20:06:52.779418   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.779426   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:52.779433   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:52.779496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:52.821105   72712 cri.go:89] found id: ""
	I0425 20:06:52.821137   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.821149   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:52.821168   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:52.821231   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:52.861781   72712 cri.go:89] found id: ""
	I0425 20:06:52.861817   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.861825   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:52.861831   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:52.861885   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:52.904602   72712 cri.go:89] found id: ""
	I0425 20:06:52.904633   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.904644   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:52.904651   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:52.904712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:52.951137   72712 cri.go:89] found id: ""
	I0425 20:06:52.951174   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.951183   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:52.951188   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:52.951234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:52.994199   72712 cri.go:89] found id: ""
	I0425 20:06:52.994249   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.994257   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:52.994262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:52.994315   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:53.031997   72712 cri.go:89] found id: ""
	I0425 20:06:53.032020   72712 logs.go:276] 0 containers: []
	W0425 20:06:53.032027   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:53.032035   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:53.032046   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:53.111351   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:53.111383   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:53.162470   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:53.162504   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:53.217188   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:53.217223   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:53.233071   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:53.233100   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:53.308983   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:55.809162   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:55.825185   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:55.825259   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:55.865963   72712 cri.go:89] found id: ""
	I0425 20:06:55.865989   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.866001   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:55.866009   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:55.866060   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:55.920565   72712 cri.go:89] found id: ""
	I0425 20:06:55.920601   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.920612   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:55.920620   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:55.920677   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:55.962643   72712 cri.go:89] found id: ""
	I0425 20:06:55.962669   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.962677   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:55.962684   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:55.962738   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:56.000737   72712 cri.go:89] found id: ""
	I0425 20:06:56.000764   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.000773   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:56.000782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:56.000828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:56.042226   72712 cri.go:89] found id: ""
	I0425 20:06:56.042251   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.042259   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:56.042265   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:56.042316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:56.080765   72712 cri.go:89] found id: ""
	I0425 20:06:56.080788   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.080798   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:56.080810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:56.080869   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:56.119563   72712 cri.go:89] found id: ""
	I0425 20:06:56.119590   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.119602   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:56.119608   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:56.119667   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:56.160136   72712 cri.go:89] found id: ""
	I0425 20:06:56.160162   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.160170   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:56.160179   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:56.160193   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:56.213506   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:56.213539   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:56.232121   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:56.232150   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:56.336606   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:56.336629   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:56.336640   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:56.426867   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:56.426908   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:54.374441   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:56.374847   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.916077   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.916521   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.384779   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.884281   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:58.975395   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:58.991064   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:58.991125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:59.031157   72712 cri.go:89] found id: ""
	I0425 20:06:59.031179   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.031190   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:59.031197   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:59.031253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:59.071893   72712 cri.go:89] found id: ""
	I0425 20:06:59.071923   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.071931   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:59.071937   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:59.071998   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:59.114714   72712 cri.go:89] found id: ""
	I0425 20:06:59.114749   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.114760   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:59.114768   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:59.114840   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:59.159482   72712 cri.go:89] found id: ""
	I0425 20:06:59.159510   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.159518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:59.159523   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:59.159575   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:59.201218   72712 cri.go:89] found id: ""
	I0425 20:06:59.201245   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.201253   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:59.201263   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:59.201312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:59.247277   72712 cri.go:89] found id: ""
	I0425 20:06:59.247305   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.247316   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:59.247324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:59.247379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:59.286713   72712 cri.go:89] found id: ""
	I0425 20:06:59.286738   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.286746   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:59.286751   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:59.286804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:59.332263   72712 cri.go:89] found id: ""
	I0425 20:06:59.332296   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.332320   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:59.332332   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:59.332346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:59.416446   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:59.416477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:59.462125   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:59.462166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:59.514881   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:59.514907   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:59.530109   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:59.530134   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:59.605820   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.106478   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:02.124859   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:02.124934   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:02.180491   72712 cri.go:89] found id: ""
	I0425 20:07:02.180526   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.180537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:02.180545   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:02.180601   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:02.237075   72712 cri.go:89] found id: ""
	I0425 20:07:02.237104   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.237118   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:02.237126   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:02.237190   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:02.295104   72712 cri.go:89] found id: ""
	I0425 20:07:02.295129   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.295140   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:02.295148   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:02.295210   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:02.335392   72712 cri.go:89] found id: ""
	I0425 20:07:02.335418   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.335428   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:02.335435   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:02.335496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:58.871748   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.372545   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.373424   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.917135   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.917504   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.885744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:04.385280   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:02.376964   72712 cri.go:89] found id: ""
	I0425 20:07:02.376990   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.377002   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:02.377009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:02.377066   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:02.415460   72712 cri.go:89] found id: ""
	I0425 20:07:02.415484   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.415491   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:02.415496   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:02.415550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:02.461946   72712 cri.go:89] found id: ""
	I0425 20:07:02.461972   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.461993   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:02.462009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:02.462075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:02.502829   72712 cri.go:89] found id: ""
	I0425 20:07:02.502851   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.502858   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:02.502866   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:02.502878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:02.558264   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:02.558296   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:02.574175   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:02.574225   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:02.649363   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.649389   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:02.649404   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:02.730528   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:02.730560   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.276648   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:05.292055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:05.292121   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:05.332849   72712 cri.go:89] found id: ""
	I0425 20:07:05.332874   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.332884   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:05.332892   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:05.332954   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:05.376446   72712 cri.go:89] found id: ""
	I0425 20:07:05.376475   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.376487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:05.376494   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:05.376556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:05.418635   72712 cri.go:89] found id: ""
	I0425 20:07:05.418664   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.418675   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:05.418682   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:05.418745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:05.459082   72712 cri.go:89] found id: ""
	I0425 20:07:05.459113   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.459123   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:05.459128   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:05.459175   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:05.498473   72712 cri.go:89] found id: ""
	I0425 20:07:05.498502   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.498514   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:05.498521   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:05.498578   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:05.543121   72712 cri.go:89] found id: ""
	I0425 20:07:05.543150   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.543159   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:05.543164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:05.543211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:05.585722   72712 cri.go:89] found id: ""
	I0425 20:07:05.585748   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.585758   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:05.585766   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:05.585826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:05.629614   72712 cri.go:89] found id: ""
	I0425 20:07:05.629647   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.629661   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:05.629671   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:05.629685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:05.683974   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:05.684007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:05.700651   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:05.700685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:05.782097   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:05.782127   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:05.782142   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:05.863881   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:05.863918   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.374553   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:07.872114   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.417080   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.417436   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:10.418259   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.885509   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:09.383078   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.412898   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:08.428152   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:08.428206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:08.468403   72712 cri.go:89] found id: ""
	I0425 20:07:08.468441   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.468455   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:08.468464   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:08.468529   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:08.511246   72712 cri.go:89] found id: ""
	I0425 20:07:08.511285   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.511297   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:08.511304   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:08.511363   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:08.553121   72712 cri.go:89] found id: ""
	I0425 20:07:08.553148   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.553155   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:08.553161   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:08.553214   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:08.589723   72712 cri.go:89] found id: ""
	I0425 20:07:08.589745   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.589755   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:08.589762   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:08.589826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:08.629502   72712 cri.go:89] found id: ""
	I0425 20:07:08.629525   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.629533   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:08.629538   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:08.629591   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:08.677107   72712 cri.go:89] found id: ""
	I0425 20:07:08.677144   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.677153   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:08.677164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:08.677212   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:08.716501   72712 cri.go:89] found id: ""
	I0425 20:07:08.716531   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.716542   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:08.716550   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:08.716611   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:08.763473   72712 cri.go:89] found id: ""
	I0425 20:07:08.763503   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.763515   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:08.763526   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:08.763543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:08.848961   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:08.848985   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:08.849000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:08.945851   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:08.945890   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:08.989429   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:08.989460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:09.042721   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:09.042756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.559400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:11.575100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:11.575180   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:11.613246   72712 cri.go:89] found id: ""
	I0425 20:07:11.613271   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.613284   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:11.613290   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:11.613351   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:11.655158   72712 cri.go:89] found id: ""
	I0425 20:07:11.655189   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.655200   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:11.655208   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:11.655266   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:11.695122   72712 cri.go:89] found id: ""
	I0425 20:07:11.695144   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.695151   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:11.695156   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:11.695205   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:11.735578   72712 cri.go:89] found id: ""
	I0425 20:07:11.735604   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.735615   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:11.735621   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:11.735680   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:11.774750   72712 cri.go:89] found id: ""
	I0425 20:07:11.774785   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.774795   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:11.774803   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:11.774855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:11.814878   72712 cri.go:89] found id: ""
	I0425 20:07:11.814908   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.814920   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:11.814939   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:11.815000   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:11.853262   72712 cri.go:89] found id: ""
	I0425 20:07:11.853295   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.853306   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:11.853313   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:11.853379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:11.897291   72712 cri.go:89] found id: ""
	I0425 20:07:11.897314   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.897324   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:11.897333   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:11.897348   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:11.956913   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:11.956945   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.973787   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:11.973821   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:12.055801   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:12.055826   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:12.055842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:12.140238   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:12.140270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:10.372634   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.374037   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.418299   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.919967   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:11.383994   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:13.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:15.884319   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.685296   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:14.699655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:14.699740   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:14.741907   72712 cri.go:89] found id: ""
	I0425 20:07:14.741936   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.741947   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:14.741955   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:14.742017   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:14.786457   72712 cri.go:89] found id: ""
	I0425 20:07:14.786479   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.786487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:14.786493   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:14.786537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:14.825010   72712 cri.go:89] found id: ""
	I0425 20:07:14.825042   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.825055   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:14.825063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:14.825124   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:14.874834   72712 cri.go:89] found id: ""
	I0425 20:07:14.874856   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.874867   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:14.874875   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:14.874933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:14.914636   72712 cri.go:89] found id: ""
	I0425 20:07:14.914674   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.914685   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:14.914693   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:14.914752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:14.959327   72712 cri.go:89] found id: ""
	I0425 20:07:14.959356   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.959365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:14.959372   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:14.959425   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:15.000637   72712 cri.go:89] found id: ""
	I0425 20:07:15.000666   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.000674   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:15.000680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:15.000728   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:15.040497   72712 cri.go:89] found id: ""
	I0425 20:07:15.040523   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.040531   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:15.040539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:15.040550   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:15.120206   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:15.120240   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:15.168292   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:15.168324   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:15.222133   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:15.222164   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:15.237719   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:15.237746   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:15.323404   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:14.872743   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.375231   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.420149   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:19.420277   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:18.384902   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:20.883469   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.823552   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:17.838837   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:17.838911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:17.880547   72712 cri.go:89] found id: ""
	I0425 20:07:17.880584   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.880595   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:17.880608   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:17.880669   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:17.929700   72712 cri.go:89] found id: ""
	I0425 20:07:17.929730   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.929742   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:17.929797   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:17.929861   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:17.974057   72712 cri.go:89] found id: ""
	I0425 20:07:17.974081   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.974088   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:17.974094   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:17.974142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:18.013173   72712 cri.go:89] found id: ""
	I0425 20:07:18.013200   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.013209   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:18.013215   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:18.013267   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:18.053525   72712 cri.go:89] found id: ""
	I0425 20:07:18.053557   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.053568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:18.053580   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:18.053644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:18.095972   72712 cri.go:89] found id: ""
	I0425 20:07:18.096004   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.096016   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:18.096024   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:18.096089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:18.136792   72712 cri.go:89] found id: ""
	I0425 20:07:18.136823   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.136834   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:18.136842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:18.136904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:18.176562   72712 cri.go:89] found id: ""
	I0425 20:07:18.176594   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.176605   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:18.176619   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:18.176634   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:18.254402   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:18.254440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:18.298075   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:18.298112   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:18.356091   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:18.356124   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:18.373788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:18.373822   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:18.452545   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:20.952752   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:20.972054   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:20.972133   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:21.015572   72712 cri.go:89] found id: ""
	I0425 20:07:21.015602   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.015613   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:21.015621   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:21.015689   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:21.053313   72712 cri.go:89] found id: ""
	I0425 20:07:21.053342   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.053352   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:21.053359   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:21.053422   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:21.090343   72712 cri.go:89] found id: ""
	I0425 20:07:21.090373   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.090384   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:21.090391   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:21.090472   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:21.127148   72712 cri.go:89] found id: ""
	I0425 20:07:21.127174   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.127184   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:21.127192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:21.127258   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:21.167175   72712 cri.go:89] found id: ""
	I0425 20:07:21.167199   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.167207   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:21.167212   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:21.167263   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:21.212740   72712 cri.go:89] found id: ""
	I0425 20:07:21.212771   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.212783   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:21.212791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:21.212856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:21.250751   72712 cri.go:89] found id: ""
	I0425 20:07:21.250774   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.250782   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:21.250788   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:21.250833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:21.292387   72712 cri.go:89] found id: ""
	I0425 20:07:21.292414   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.292426   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:21.292436   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:21.292451   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:21.337695   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:21.337726   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:21.395479   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:21.395520   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:21.411538   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:21.411564   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:21.493248   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:21.493270   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:21.493282   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:19.873680   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.372461   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:21.421770   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:23.426808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.883520   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.884554   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.076755   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:24.093549   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:24.093624   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:24.135660   72712 cri.go:89] found id: ""
	I0425 20:07:24.135686   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.135694   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:24.135705   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:24.135784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:24.179778   72712 cri.go:89] found id: ""
	I0425 20:07:24.179799   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.179807   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:24.179824   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:24.179883   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.226745   72712 cri.go:89] found id: ""
	I0425 20:07:24.226771   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.226780   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:24.226785   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:24.226839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:24.273302   72712 cri.go:89] found id: ""
	I0425 20:07:24.273327   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.273347   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:24.273354   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:24.273421   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:24.314117   72712 cri.go:89] found id: ""
	I0425 20:07:24.314149   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.314160   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:24.314167   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:24.314247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:24.353144   72712 cri.go:89] found id: ""
	I0425 20:07:24.353173   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.353184   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:24.353192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:24.353292   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:24.395899   72712 cri.go:89] found id: ""
	I0425 20:07:24.395925   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.395933   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:24.395938   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:24.395988   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:24.444470   72712 cri.go:89] found id: ""
	I0425 20:07:24.444503   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.444514   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:24.444525   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:24.444540   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:24.499845   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:24.499876   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:24.517421   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:24.517449   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:24.596509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:24.596530   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:24.596543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:24.710844   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:24.710878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.259541   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:27.275551   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:27.275609   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:27.314610   72712 cri.go:89] found id: ""
	I0425 20:07:27.314640   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.314651   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:27.314656   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:27.314712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:27.350100   72712 cri.go:89] found id: ""
	I0425 20:07:27.350132   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.350151   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:27.350158   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:27.350226   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.373886   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:26.873863   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:25.917794   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:28.417757   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:30.419922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.384565   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:29.385043   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.390197   72712 cri.go:89] found id: ""
	I0425 20:07:27.390238   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.390249   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:27.390257   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:27.390312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:27.431936   72712 cri.go:89] found id: ""
	I0425 20:07:27.431961   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.431973   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:27.431980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:27.432038   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:27.469175   72712 cri.go:89] found id: ""
	I0425 20:07:27.469204   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.469212   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:27.469218   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:27.469276   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:27.509385   72712 cri.go:89] found id: ""
	I0425 20:07:27.509416   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.509428   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:27.509436   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:27.509503   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:27.548997   72712 cri.go:89] found id: ""
	I0425 20:07:27.549034   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.549045   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:27.549052   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:27.549111   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:27.588925   72712 cri.go:89] found id: ""
	I0425 20:07:27.588959   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.588973   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:27.588985   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:27.589000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.635005   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:27.635040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:27.686587   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:27.686617   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:27.702913   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:27.702942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:27.775525   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:27.775551   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:27.775562   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.352358   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:30.367016   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:30.367088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:30.410878   72712 cri.go:89] found id: ""
	I0425 20:07:30.410906   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.410917   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:30.410927   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:30.410985   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:30.456150   72712 cri.go:89] found id: ""
	I0425 20:07:30.456173   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.456181   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:30.456186   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:30.456234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:30.495409   72712 cri.go:89] found id: ""
	I0425 20:07:30.495439   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.495450   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:30.495458   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:30.495516   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:30.535863   72712 cri.go:89] found id: ""
	I0425 20:07:30.535895   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.535906   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:30.535912   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:30.535971   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:30.573772   72712 cri.go:89] found id: ""
	I0425 20:07:30.573808   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.573819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:30.573826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:30.573892   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:30.626310   72712 cri.go:89] found id: ""
	I0425 20:07:30.626350   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.626362   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:30.626376   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:30.626438   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:30.666302   72712 cri.go:89] found id: ""
	I0425 20:07:30.666332   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.666343   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:30.666350   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:30.666413   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:30.703478   72712 cri.go:89] found id: ""
	I0425 20:07:30.703507   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.703519   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:30.703529   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:30.703543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:30.756532   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:30.756566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:30.772128   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:30.772158   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:30.853701   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:30.853728   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:30.853743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.935879   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:30.935917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:29.372219   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.872125   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:32.865998   72220 pod_ready.go:81] duration metric: took 4m0.000690329s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:32.866038   72220 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0425 20:07:32.866057   72220 pod_ready.go:38] duration metric: took 4m13.047288103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:32.866091   72220 kubeadm.go:591] duration metric: took 4m22.882679222s to restartPrimaryControlPlane
	W0425 20:07:32.866150   72220 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:32.866182   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:32.917319   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:35.421922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.886418   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.894776   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.483702   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:33.498238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:33.498310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:33.545696   72712 cri.go:89] found id: ""
	I0425 20:07:33.545723   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.545731   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:33.545737   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:33.545791   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:33.590808   72712 cri.go:89] found id: ""
	I0425 20:07:33.590837   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.590849   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:33.590857   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:33.590919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:33.634529   72712 cri.go:89] found id: ""
	I0425 20:07:33.634554   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.634562   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:33.634572   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:33.634640   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:33.679055   72712 cri.go:89] found id: ""
	I0425 20:07:33.679082   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.679093   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:33.679100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:33.679160   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:33.720653   72712 cri.go:89] found id: ""
	I0425 20:07:33.720686   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.720698   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:33.720706   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:33.720777   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:33.766163   72712 cri.go:89] found id: ""
	I0425 20:07:33.766221   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.766233   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:33.766241   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:33.766314   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:33.810804   72712 cri.go:89] found id: ""
	I0425 20:07:33.810830   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.810839   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:33.810844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:33.810908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:33.858109   72712 cri.go:89] found id: ""
	I0425 20:07:33.858140   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.858152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:33.858162   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:33.858176   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:33.926296   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:33.926333   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:33.944220   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:33.944249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:34.042119   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:34.042191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:34.042234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:34.143694   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:34.143732   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:36.691575   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:36.710408   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:36.710490   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:36.760097   72712 cri.go:89] found id: ""
	I0425 20:07:36.760135   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.760144   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:36.760150   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:36.760208   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:36.801508   72712 cri.go:89] found id: ""
	I0425 20:07:36.801532   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.801541   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:36.801546   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:36.801602   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:36.842293   72712 cri.go:89] found id: ""
	I0425 20:07:36.842328   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.842340   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:36.842355   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:36.842418   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:36.884101   72712 cri.go:89] found id: ""
	I0425 20:07:36.884131   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.884141   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:36.884149   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:36.884211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:36.925007   72712 cri.go:89] found id: ""
	I0425 20:07:36.925032   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.925039   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:36.925045   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:36.925109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:36.964975   72712 cri.go:89] found id: ""
	I0425 20:07:36.965009   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.965020   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:36.965028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:36.965088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:37.030956   72712 cri.go:89] found id: ""
	I0425 20:07:37.030987   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.030999   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:37.031007   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:37.031080   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:37.105919   72712 cri.go:89] found id: ""
	I0425 20:07:37.105946   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.105956   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:37.105967   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:37.105983   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:37.196376   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:37.196415   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:37.240296   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:37.240334   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:37.304336   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:37.304371   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:37.323146   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:37.323184   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:37.918245   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.418671   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:36.384384   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:38.387656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.883973   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	W0425 20:07:37.414563   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:39.915087   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:39.930987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:39.931068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:39.967641   72712 cri.go:89] found id: ""
	I0425 20:07:39.967682   72712 logs.go:276] 0 containers: []
	W0425 20:07:39.967693   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:39.967698   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:39.967755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:40.009924   72712 cri.go:89] found id: ""
	I0425 20:07:40.009951   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.009959   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:40.009969   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:40.010019   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:40.049644   72712 cri.go:89] found id: ""
	I0425 20:07:40.049675   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.049689   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:40.049697   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:40.049759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:40.090487   72712 cri.go:89] found id: ""
	I0425 20:07:40.090509   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.090519   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:40.090524   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:40.090583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:40.137634   72712 cri.go:89] found id: ""
	I0425 20:07:40.137664   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.137674   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:40.137681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:40.137745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:40.174832   72712 cri.go:89] found id: ""
	I0425 20:07:40.174863   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.174874   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:40.174882   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:40.174947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:40.212559   72712 cri.go:89] found id: ""
	I0425 20:07:40.212585   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.212593   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:40.212598   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:40.212687   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:40.253459   72712 cri.go:89] found id: ""
	I0425 20:07:40.253494   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.253506   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:40.253518   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:40.253533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:40.311253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:40.311288   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:40.326693   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:40.326722   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:40.405792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:40.405816   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:40.405831   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:40.486712   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:40.486749   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:42.419025   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:44.916387   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:41.387375   72304 pod_ready.go:81] duration metric: took 4m0.010411263s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:41.387396   72304 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:07:41.387402   72304 pod_ready.go:38] duration metric: took 4m6.083068398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:41.387414   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:07:41.387441   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:41.387498   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:41.459873   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:41.459899   72304 cri.go:89] found id: ""
	I0425 20:07:41.459907   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:41.459960   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.465470   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:41.465534   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:41.509504   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:41.509523   72304 cri.go:89] found id: ""
	I0425 20:07:41.509530   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:41.509584   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.515012   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:41.515070   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:41.562701   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:41.562727   72304 cri.go:89] found id: ""
	I0425 20:07:41.562737   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:41.562792   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.567856   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:41.567928   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:41.618411   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:41.618441   72304 cri.go:89] found id: ""
	I0425 20:07:41.618452   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:41.618510   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.625757   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:41.625826   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:41.672707   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:41.672734   72304 cri.go:89] found id: ""
	I0425 20:07:41.672741   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:41.672785   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.678040   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:41.678119   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:41.725172   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:41.725196   72304 cri.go:89] found id: ""
	I0425 20:07:41.725205   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:41.725264   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.730651   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:41.730718   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:41.777224   72304 cri.go:89] found id: ""
	I0425 20:07:41.777269   72304 logs.go:276] 0 containers: []
	W0425 20:07:41.777280   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:41.777290   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:41.777380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:41.821498   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:41.821524   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:41.821531   72304 cri.go:89] found id: ""
	I0425 20:07:41.821541   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:41.821599   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.827065   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.831900   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:41.831924   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:41.893198   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:41.893233   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:41.909141   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:41.909169   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:42.051260   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:42.051305   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:42.109173   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:42.109214   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:42.155862   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:42.155894   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:42.222430   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:42.222466   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:42.265323   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:42.265353   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:42.316534   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:42.316569   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:42.363543   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:42.363568   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:42.422389   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:42.422421   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:42.471230   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:42.471259   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.011223   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.011263   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:45.578411   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:45.597748   72304 api_server.go:72] duration metric: took 4m16.066757074s to wait for apiserver process to appear ...
	I0425 20:07:45.597777   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:07:45.597813   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:45.597861   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:45.649452   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:45.649481   72304 cri.go:89] found id: ""
	I0425 20:07:45.649491   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:45.649534   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.654965   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:45.655023   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:45.701151   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:45.701177   72304 cri.go:89] found id: ""
	I0425 20:07:45.701186   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:45.701238   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.706702   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:45.706767   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:45.763142   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:45.763167   72304 cri.go:89] found id: ""
	I0425 20:07:45.763177   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:45.763220   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.768626   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:45.768684   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:45.816615   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:45.816648   72304 cri.go:89] found id: ""
	I0425 20:07:45.816656   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:45.816701   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.822714   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:45.822790   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:45.875652   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:45.875678   72304 cri.go:89] found id: ""
	I0425 20:07:45.875688   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:45.875737   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.881649   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:45.881719   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:45.930631   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:45.930656   72304 cri.go:89] found id: ""
	I0425 20:07:45.930666   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:45.930721   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.939712   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:45.939783   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:45.984646   72304 cri.go:89] found id: ""
	I0425 20:07:45.984684   72304 logs.go:276] 0 containers: []
	W0425 20:07:45.984693   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:45.984699   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:45.984754   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:46.029752   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.029777   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.029782   72304 cri.go:89] found id: ""
	I0425 20:07:46.029789   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:46.029845   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.035189   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.040479   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:46.040503   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:46.101469   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:46.101509   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:46.167362   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:46.167401   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:46.217732   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:46.217759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:46.264372   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:46.264404   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:43.037730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:43.064471   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:43.064550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:43.130075   72712 cri.go:89] found id: ""
	I0425 20:07:43.130111   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.130129   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:43.130136   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:43.130195   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:43.169628   72712 cri.go:89] found id: ""
	I0425 20:07:43.169663   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.169675   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:43.169682   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:43.169748   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:43.214845   72712 cri.go:89] found id: ""
	I0425 20:07:43.214869   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.214877   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:43.214883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:43.214929   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:43.263047   72712 cri.go:89] found id: ""
	I0425 20:07:43.263069   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.263078   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:43.263083   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:43.263142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:43.313179   72712 cri.go:89] found id: ""
	I0425 20:07:43.313213   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.313223   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:43.313231   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:43.313295   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:43.353440   72712 cri.go:89] found id: ""
	I0425 20:07:43.353468   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.353480   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:43.353488   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:43.353546   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:43.392261   72712 cri.go:89] found id: ""
	I0425 20:07:43.392288   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.392296   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:43.392321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:43.392378   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:43.431111   72712 cri.go:89] found id: ""
	I0425 20:07:43.431139   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.431147   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:43.431155   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:43.431165   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:43.485087   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:43.485120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:43.501508   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:43.501536   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:43.586041   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:43.586073   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:43.586089   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.663194   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.663232   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:46.218461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:46.233195   72712 kubeadm.go:591] duration metric: took 4m4.06065248s to restartPrimaryControlPlane
	W0425 20:07:46.233281   72712 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:46.233311   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:48.166680   72712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.933342568s)
	I0425 20:07:48.166771   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:48.185391   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:07:48.198250   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:07:48.209825   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:07:48.209843   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:07:48.209897   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:07:48.220854   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:07:48.220909   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:07:48.231518   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:07:48.241515   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:07:48.241589   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:07:48.251764   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.261762   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:07:48.261813   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.271952   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:07:48.281914   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:07:48.281986   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:07:48.292879   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:07:48.372322   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:07:48.372460   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:07:48.529730   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:07:48.529854   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:07:48.529979   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:07:48.753171   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:07:48.755473   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:07:48.755590   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:07:48.755692   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:07:48.755809   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:07:48.755905   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:07:48.756132   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:07:48.756317   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:07:48.756867   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:07:48.757498   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:07:48.758073   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:07:48.758581   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:07:48.758745   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:07:48.758842   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:07:48.894873   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:07:48.946907   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:07:49.084938   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:07:49.201925   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:07:49.219675   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:07:49.220891   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:07:49.220951   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:07:49.387310   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:07:46.917886   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:48.919793   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:46.324627   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:46.324653   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:46.382068   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:46.382102   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.424672   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:46.424709   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.466659   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:46.466692   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:46.484868   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:46.484898   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:46.614688   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:46.614720   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:46.666805   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:46.666846   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:47.098854   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:47.098899   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:49.653042   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:07:49.657843   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:07:49.659251   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:07:49.659285   72304 api_server.go:131] duration metric: took 4.061499319s to wait for apiserver health ...
	I0425 20:07:49.659295   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:07:49.659321   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:49.659380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:49.709699   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:49.709721   72304 cri.go:89] found id: ""
	I0425 20:07:49.709729   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:49.709795   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.715369   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:49.715429   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:49.773517   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:49.773544   72304 cri.go:89] found id: ""
	I0425 20:07:49.773554   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:49.773617   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.778984   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:49.779071   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:49.825707   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:49.825739   72304 cri.go:89] found id: ""
	I0425 20:07:49.825746   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:49.825790   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.830613   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:49.830678   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:49.872068   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:49.872094   72304 cri.go:89] found id: ""
	I0425 20:07:49.872104   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:49.872166   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.877311   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:49.877383   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:49.930182   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:49.930216   72304 cri.go:89] found id: ""
	I0425 20:07:49.930228   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:49.930283   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.935415   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:49.935484   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:49.985377   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:49.985404   72304 cri.go:89] found id: ""
	I0425 20:07:49.985412   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:49.985469   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.991021   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:49.991092   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:50.037755   72304 cri.go:89] found id: ""
	I0425 20:07:50.037787   72304 logs.go:276] 0 containers: []
	W0425 20:07:50.037802   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:50.037811   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:50.037875   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:50.083706   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.083731   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.083735   72304 cri.go:89] found id: ""
	I0425 20:07:50.083742   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:50.083793   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.088730   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.094339   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:50.094371   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:50.161538   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:50.161573   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.204178   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:50.204211   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.251315   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:50.251344   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:50.315859   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:50.315886   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:50.367787   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:50.367829   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:50.429509   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:50.429541   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:50.488723   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:50.488759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:50.506838   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:50.506879   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:50.629496   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:50.629526   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:50.689286   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:50.689321   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:50.731343   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:50.731373   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:50.772085   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:50.772114   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:49.389887   72712 out.go:204]   - Booting up control plane ...
	I0425 20:07:49.390011   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:07:49.395060   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:07:49.398108   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:07:49.398220   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:07:49.402596   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:07:53.651817   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:07:53.651845   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.651850   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.651854   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.651859   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.651862   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.651865   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.651872   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.651878   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.651885   72304 system_pods.go:74] duration metric: took 3.992584481s to wait for pod list to return data ...
	I0425 20:07:53.651892   72304 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:07:53.654617   72304 default_sa.go:45] found service account: "default"
	I0425 20:07:53.654641   72304 default_sa.go:55] duration metric: took 2.742232ms for default service account to be created ...
	I0425 20:07:53.654649   72304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:07:53.660082   72304 system_pods.go:86] 8 kube-system pods found
	I0425 20:07:53.660110   72304 system_pods.go:89] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.660116   72304 system_pods.go:89] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.660121   72304 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.660127   72304 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.660131   72304 system_pods.go:89] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.660135   72304 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.660142   72304 system_pods.go:89] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.660148   72304 system_pods.go:89] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.660154   72304 system_pods.go:126] duration metric: took 5.50043ms to wait for k8s-apps to be running ...
	I0425 20:07:53.660161   72304 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:07:53.660201   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:53.677461   72304 system_svc.go:56] duration metric: took 17.289854ms WaitForService to wait for kubelet
	I0425 20:07:53.677499   72304 kubeadm.go:576] duration metric: took 4m24.146512306s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:07:53.677524   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:07:53.681527   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:07:53.681562   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:07:53.681576   72304 node_conditions.go:105] duration metric: took 4.045221ms to run NodePressure ...
	I0425 20:07:53.681591   72304 start.go:240] waiting for startup goroutines ...
	I0425 20:07:53.681605   72304 start.go:245] waiting for cluster config update ...
	I0425 20:07:53.681622   72304 start.go:254] writing updated cluster config ...
	I0425 20:07:53.682002   72304 ssh_runner.go:195] Run: rm -f paused
	I0425 20:07:53.732056   72304 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:07:53.734302   72304 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142196" cluster and "default" namespace by default
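The verification pass that precedes this "Done!" line checks, in order: that the kube-system pods are Running, that the "default" service account exists, that the kubelet systemd unit is active, and that the node reports its capacity without pressure conditions. The kubelet check is just an exit-code test; a minimal sketch in its conventional form (this is an illustration, not minikube's own code, which runs the command over SSH):

    // Sketch: `systemctl is-active --quiet kubelet` exits 0 only when the unit
    // is active, so the returned error alone answers the question.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }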
	I0425 20:07:51.419808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:53.916090   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:55.917139   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:58.417609   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:00.917152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:02.918628   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.419508   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.765908   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.899694836s)
	I0425 20:08:05.765989   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:05.787711   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:08:05.801717   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:08:05.813710   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:08:05.813741   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:08:05.813802   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:08:05.825122   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:08:05.825202   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:08:05.837118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:08:05.848807   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:08:05.848880   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:08:05.862028   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.873795   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:08:05.873919   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.885577   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:08:05.897605   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:08:05.897685   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
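The sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it, so the kubeadm init that follows starts from a clean slate (here all four files were already absent after the reset). A minimal local sketch of that check, assuming direct filesystem access rather than minikube's SSH runner:

    // Keep a kubeconfig only if it references the expected control-plane
    // endpoint; otherwise remove it so kubeadm init can rewrite it.
    package main

    import (
    	"bytes"
    	"os"
    )

    func cleanStaleConfig(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil // missing file: nothing to clean, kubeadm init recreates it
    	}
    	if !bytes.Contains(data, []byte(endpoint)) {
    		return os.Remove(path) // stale config pointing at a different endpoint
    	}
    	return nil
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		_ = cleanStaleConfig("/etc/kubernetes/"+f, endpoint)
    	}
    }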
	I0425 20:08:05.909284   72220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:08:05.965574   72220 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 20:08:05.965663   72220 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:08:06.133359   72220 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:08:06.133525   72220 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:08:06.133675   72220 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:08:06.391437   72220 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:08:06.393805   72220 out.go:204]   - Generating certificates and keys ...
	I0425 20:08:06.393905   72220 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:08:06.393994   72220 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:08:06.394121   72220 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:08:06.394237   72220 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:08:06.394332   72220 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:08:06.394417   72220 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:08:06.394514   72220 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:08:06.396093   72220 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:08:06.396202   72220 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:08:06.396300   72220 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:08:06.396358   72220 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:08:06.396423   72220 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:08:06.683452   72220 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:08:06.778456   72220 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 20:08:06.923709   72220 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:08:07.079685   72220 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:08:07.170533   72220 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:08:07.171070   72220 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:08:07.173798   72220 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:08:07.175699   72220 out.go:204]   - Booting up control plane ...
	I0425 20:08:07.175824   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:08:07.175924   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:08:07.176060   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:08:07.197685   72220 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:08:07.200579   72220 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:08:07.200645   72220 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:08:07.354665   72220 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 20:08:07.354779   72220 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 20:08:07.855900   72220 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.56346ms
	I0425 20:08:07.856015   72220 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 20:08:07.423114   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:09.425115   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:13.358654   72220 kubeadm.go:309] [api-check] The API server is healthy after 5.502458238s
	I0425 20:08:13.388381   72220 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 20:08:13.908867   72220 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 20:08:13.945417   72220 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 20:08:13.945708   72220 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-744552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 20:08:13.959901   72220 kubeadm.go:309] [bootstrap-token] Using token: r2mxoe.iuelddsr8gvoq1wo
	I0425 20:08:13.961409   72220 out.go:204]   - Configuring RBAC rules ...
	I0425 20:08:13.961552   72220 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 20:08:13.970435   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 20:08:13.978933   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 20:08:13.982503   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 20:08:13.987029   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 20:08:13.990969   72220 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 20:08:14.103051   72220 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 20:08:14.554715   72220 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 20:08:15.105951   72220 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 20:08:15.107134   72220 kubeadm.go:309] 
	I0425 20:08:15.107222   72220 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 20:08:15.107236   72220 kubeadm.go:309] 
	I0425 20:08:15.107336   72220 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 20:08:15.107349   72220 kubeadm.go:309] 
	I0425 20:08:15.107379   72220 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 20:08:15.107463   72220 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 20:08:15.107550   72220 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 20:08:15.107560   72220 kubeadm.go:309] 
	I0425 20:08:15.107657   72220 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 20:08:15.107668   72220 kubeadm.go:309] 
	I0425 20:08:15.107735   72220 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 20:08:15.107747   72220 kubeadm.go:309] 
	I0425 20:08:15.107807   72220 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 20:08:15.107935   72220 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 20:08:15.108030   72220 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 20:08:15.108042   72220 kubeadm.go:309] 
	I0425 20:08:15.108154   72220 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 20:08:15.108269   72220 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 20:08:15.108280   72220 kubeadm.go:309] 
	I0425 20:08:15.108395   72220 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.108556   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
	I0425 20:08:15.108594   72220 kubeadm.go:309] 	--control-plane 
	I0425 20:08:15.108603   72220 kubeadm.go:309] 
	I0425 20:08:15.108719   72220 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 20:08:15.108730   72220 kubeadm.go:309] 
	I0425 20:08:15.108849   72220 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.109004   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed 
	I0425 20:08:15.109717   72220 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
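The kubeadm join commands printed above pin the cluster identity with --discovery-token-ca-cert-hash: a joining node downloads the cluster-info anonymously via the bootstrap token and verifies the cluster CA against this hash, which is a SHA-256 over the DER-encoded Subject Public Key Info of the CA certificate. A small sketch of how that value can be recomputed; the certificateDir comes from the log above, the "ca.crt" file name is the conventional kubeadm name and is assumed here:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// certificateDir from the log is /var/lib/minikube/certs.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("ca.crt is not PEM-encoded")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm's hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
    	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }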
	I0425 20:08:15.109778   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:08:15.109797   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:08:15.111712   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:08:11.918414   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:14.420753   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:15.113288   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:08:15.129693   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
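The line above copies a 496-byte bridge CNI config into /etc/cni/net.d/1-k8s.conflist; the exact payload is not shown in the log. For orientation, a bridge conflist generally has the shape sketched below. All values here are illustrative assumptions, not minikube's actual file:

    package main

    import "os"

    // Hypothetical bridge CNI config for illustration only.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }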
	I0425 20:08:15.157631   72220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:08:15.157709   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.157760   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-744552 minikube.k8s.io/updated_at=2024_04_25T20_08_15_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=no-preload-744552 minikube.k8s.io/primary=true
	I0425 20:08:15.374198   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.418592   72220 ops.go:34] apiserver oom_adj: -16
	I0425 20:08:15.874721   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.374969   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.875091   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.375038   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.874685   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:18.374802   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.917617   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:19.421721   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:18.874931   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.374961   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.874349   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.374787   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.875130   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.374959   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.874325   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.374798   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.875034   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:23.374899   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.917898   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:22.917132   71966 pod_ready.go:81] duration metric: took 4m0.007062693s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:08:22.917156   71966 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:08:22.917164   71966 pod_ready.go:38] duration metric: took 4m4.548150095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:22.917179   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:22.917211   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:22.917270   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:22.982604   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:22.982631   71966 cri.go:89] found id: ""
	I0425 20:08:22.982640   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:22.982698   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:22.988558   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:22.988618   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:23.031937   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.031964   71966 cri.go:89] found id: ""
	I0425 20:08:23.031973   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:23.032031   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.037315   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:23.037371   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:23.089839   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.089862   71966 cri.go:89] found id: ""
	I0425 20:08:23.089872   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:23.089936   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.095247   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:23.095309   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:23.136257   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.136286   71966 cri.go:89] found id: ""
	I0425 20:08:23.136294   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:23.136357   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.142548   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:23.142608   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:23.186190   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.186229   71966 cri.go:89] found id: ""
	I0425 20:08:23.186239   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:23.186301   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.191422   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:23.191494   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:23.242326   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.242361   71966 cri.go:89] found id: ""
	I0425 20:08:23.242371   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:23.242437   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.248578   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:23.248642   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:23.286781   71966 cri.go:89] found id: ""
	I0425 20:08:23.286807   71966 logs.go:276] 0 containers: []
	W0425 20:08:23.286817   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:23.286823   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:23.286885   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:23.334728   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:23.334754   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.334761   71966 cri.go:89] found id: ""
	I0425 20:08:23.334770   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:23.334831   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.340288   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.344787   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:23.344808   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:23.401830   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:23.401865   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:23.425683   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:23.425715   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:23.568527   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:23.568558   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.608747   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:23.608776   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.647962   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:23.647996   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.687270   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:23.687308   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:23.745081   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:23.745112   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:23.799375   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:23.799405   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.853199   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:23.853232   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.896535   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:23.896571   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.964317   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:23.964350   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:24.013196   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:24.013231   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
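Each "Gathering logs for ..." step above resolves a container ID with `crictl ps -a --quiet --name=<component>` and then tails that container's last 400 log lines with `crictl logs`. A condensed sketch chaining the same two commands, assuming crictl is on the PATH of the machine being inspected:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // tailComponent returns the last 400 log lines of the first container whose
    // name matches the given component, mirroring the gathering steps above.
    func tailComponent(name string) (string, error) {
    	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return "", err
    	}
    	fields := strings.Fields(string(ids))
    	if len(fields) == 0 {
    		return "", fmt.Errorf("no container was found matching %q", name)
    	}
    	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", fields[0]).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	logs, err := tailComponent("kube-apiserver")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(logs)
    }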
	I0425 20:08:23.874275   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.374250   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.874396   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.374767   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.874968   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.374333   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.874916   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.374369   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.499044   72220 kubeadm.go:1107] duration metric: took 12.341393953s to wait for elevateKubeSystemPrivileges
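The long run of `kubectl get sa default` calls above is a simple poll: minikube retries until the "default" service account exists, at which point the step the log labels elevateKubeSystemPrivileges completes (about 12.3s here). A stripped-down sketch of that pattern, with the binary and kubeconfig paths copied from the log and the interval and timeout assumed:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
    // timeout expires, mirroring the repeated Run lines above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig).Run(); err == nil {
    			return nil // the default service account exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.30.0/kubectl",
    		"/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		panic(err)
    	}
    }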
	W0425 20:08:27.499078   72220 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 20:08:27.499087   72220 kubeadm.go:393] duration metric: took 5m17.572541498s to StartCluster
	I0425 20:08:27.499108   72220 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.499189   72220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:08:27.500940   72220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.501192   72220 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:08:27.503257   72220 out.go:177] * Verifying Kubernetes components...
	I0425 20:08:27.501308   72220 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:08:27.501405   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:08:27.505389   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:08:27.505403   72220 addons.go:69] Setting storage-provisioner=true in profile "no-preload-744552"
	I0425 20:08:27.505438   72220 addons.go:234] Setting addon storage-provisioner=true in "no-preload-744552"
	W0425 20:08:27.505453   72220 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:08:27.505490   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505505   72220 addons.go:69] Setting metrics-server=true in profile "no-preload-744552"
	I0425 20:08:27.505535   72220 addons.go:234] Setting addon metrics-server=true in "no-preload-744552"
	W0425 20:08:27.505546   72220 addons.go:243] addon metrics-server should already be in state true
	I0425 20:08:27.505574   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505895   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.505922   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.505492   72220 addons.go:69] Setting default-storageclass=true in profile "no-preload-744552"
	I0425 20:08:27.505990   72220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-744552"
	I0425 20:08:27.505952   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506099   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.506418   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506467   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.523666   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0425 20:08:27.526950   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0425 20:08:27.526972   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.526981   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I0425 20:08:27.527536   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527606   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527662   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.527683   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528039   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528059   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528122   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528228   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528242   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528601   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528644   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528712   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.528735   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.528800   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.529228   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.529246   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.532151   72220 addons.go:234] Setting addon default-storageclass=true in "no-preload-744552"
	W0425 20:08:27.532171   72220 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:08:27.532204   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.532543   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.532582   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.547165   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0425 20:08:27.547700   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.548354   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.548368   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.548675   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.548793   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.550640   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.554301   72220 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:08:27.553061   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0425 20:08:27.553099   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0425 20:08:27.555613   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:08:27.555630   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:08:27.555652   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.556177   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556181   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556724   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556739   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.556868   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556879   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.557128   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.557700   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.557729   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.558142   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.558406   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.559420   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.559990   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.560057   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.560076   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.560177   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.560333   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.560549   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.560967   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.562839   72220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:08:27.564442   72220 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.564480   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:08:27.564517   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.567912   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.568153   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.568171   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.570321   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.570514   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.570709   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.570945   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.578396   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
	I0425 20:08:27.586629   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.587070   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.587082   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.587584   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.587736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.589708   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.589937   72220 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.589948   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:08:27.589961   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.592640   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.592983   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.593007   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.593261   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.593541   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.593736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.593906   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.783858   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:08:27.820917   72220 node_ready.go:35] waiting up to 6m0s for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832349   72220 node_ready.go:49] node "no-preload-744552" has status "Ready":"True"
	I0425 20:08:27.832377   72220 node_ready.go:38] duration metric: took 11.423909ms for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832390   72220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:27.844475   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
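The node_ready check above reads the node's Ready condition from its status, and the pod_ready phase then waits for each system-critical pod (the labels listed two lines up) to report Ready. A rough equivalent using a plain kubectl on the host is sketched below; the node name comes from the log, everything else is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Read the node's Ready condition, as node_ready.go does via the API.
    	out, err := exec.Command("kubectl", "get", "node", "no-preload-744552", "-o",
    		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("node Ready:", string(out)) // "True" in the log above

    	// Waiting for a system pod is the same idea expressed with `kubectl wait`.
    	_ = exec.Command("kubectl", "-n", "kube-system", "wait", "--for=condition=Ready",
    		"pod", "-l", "k8s-app=kube-dns", "--timeout=6m").Run()
    }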
	I0425 20:08:27.886461   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:08:27.886483   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:08:27.899413   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.931511   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.935073   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:08:27.935098   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:08:27.989052   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:27.989082   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:08:28.016326   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:28.551863   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551894   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.551964   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551976   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552255   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552280   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552292   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552315   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552358   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.552397   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552405   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552414   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552421   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552571   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552597   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552710   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552736   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.578416   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.578445   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.578730   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.578776   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.578789   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.945831   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.945861   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946170   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946191   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946214   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.946224   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946531   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946549   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946560   72220 addons.go:470] Verifying addon metrics-server=true in "no-preload-744552"
	I0425 20:08:28.946570   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.948485   72220 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
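The addon flow visible above is: each manifest is copied into /etc/kubernetes/addons, the whole set is applied in a single kubectl invocation (the Run line at 20:08:28.016326), and the metrics-server addon is then verified. A compact sketch of that apply step, with the file names taken from the log and everything else assumed:

    package main

    import "os/exec"

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	// sudo accepts VAR=value assignments before the command, as in the log.
    	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.30.0/kubectl", "apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    		panic(string(out))
    	}
    }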
	I0425 20:08:27.005360   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:27.024856   71966 api_server.go:72] duration metric: took 4m14.401244231s to wait for apiserver process to appear ...
	I0425 20:08:27.024881   71966 api_server.go:88] waiting for apiserver healthz status ...
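"Waiting for apiserver healthz status" means the next phase polls the API server's /healthz endpoint over HTTPS until it answers with 200 OK. A minimal sketch of such a poll; the address below is a placeholder, since this profile's endpoint is not shown at this point in the log, and TLS verification is skipped purely to keep the sketch short:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 60; i++ {
    		resp, err := client.Get("https://192.168.50.2:8443/healthz") // placeholder address
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("apiserver healthy")
    			return
    		}
    		if resp != nil {
    			resp.Body.Close()
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("apiserver did not become healthy")
    }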
	I0425 20:08:27.024922   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:27.024982   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:27.072098   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:27.072129   71966 cri.go:89] found id: ""
	I0425 20:08:27.072140   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:27.072210   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.077726   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:27.077793   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:27.118834   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:27.118855   71966 cri.go:89] found id: ""
	I0425 20:08:27.118864   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:27.118917   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.125277   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:27.125347   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:27.167036   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.167064   71966 cri.go:89] found id: ""
	I0425 20:08:27.167074   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:27.167131   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.172390   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:27.172468   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:27.212933   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:27.212957   71966 cri.go:89] found id: ""
	I0425 20:08:27.212967   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:27.213022   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.218033   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:27.218083   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:27.259294   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:27.259321   71966 cri.go:89] found id: ""
	I0425 20:08:27.259331   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:27.259384   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.265537   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:27.265610   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:27.312145   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:27.312174   71966 cri.go:89] found id: ""
	I0425 20:08:27.312183   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:27.312240   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.318346   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:27.318405   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:27.362467   71966 cri.go:89] found id: ""
	I0425 20:08:27.362495   71966 logs.go:276] 0 containers: []
	W0425 20:08:27.362504   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:27.362509   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:27.362569   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:27.406810   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:27.406834   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.406839   71966 cri.go:89] found id: ""
	I0425 20:08:27.406846   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:27.406903   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.412431   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.421695   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:27.421725   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.472832   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:27.472863   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.535799   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:27.535830   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:28.004964   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:28.005006   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:28.072378   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:28.072417   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:28.236479   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:28.236523   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:28.296095   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:28.296133   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:28.351290   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:28.351314   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:28.400529   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:28.400567   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:28.459149   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:28.459178   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:28.507818   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:28.507844   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:28.565596   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:28.565627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:28.588509   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:28.588535   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
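Note: each "Gathering logs for ..." step maps onto one of two commands — per-container logs are read with crictl, service logs with journalctl, both capped at the last 400 lines. Illustrative forms, as run above (CONTAINERID is a placeholder):

	sudo /usr/bin/crictl logs --tail 400 CONTAINERID
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400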
	I0425 20:08:29.403321   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:08:29.403717   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:29.404001   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
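Note: the kubelet-check failure above is kubeadm probing the kubelet's local healthz endpoint; the probe it reports is equivalent to:

	curl -sSL http://localhost:10248/healthz

"connection refused" on 127.0.0.1:10248 means the kubelet is not running (or not yet listening) on that node.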
	I0425 20:08:28.950127   72220 addons.go:505] duration metric: took 1.448816058s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0425 20:08:29.862142   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:30.851653   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.851677   72220 pod_ready.go:81] duration metric: took 3.007171918s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.851689   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857090   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.857108   72220 pod_ready.go:81] duration metric: took 5.412841ms for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857117   72220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863315   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.863331   72220 pod_ready.go:81] duration metric: took 6.207835ms for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863339   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867557   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.867579   72220 pod_ready.go:81] duration metric: took 4.23311ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867590   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872391   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.872407   72220 pod_ready.go:81] duration metric: took 4.810397ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872415   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249226   72220 pod_ready.go:92] pod "kube-proxy-22w7x" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.249259   72220 pod_ready.go:81] duration metric: took 376.837327ms for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249284   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649908   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.649934   72220 pod_ready.go:81] duration metric: took 400.641991ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649945   72220 pod_ready.go:38] duration metric: took 3.817541056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:31.649962   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:31.650025   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:31.684094   72220 api_server.go:72] duration metric: took 4.182865357s to wait for apiserver process to appear ...
	I0425 20:08:31.684123   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:08:31.684146   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:08:31.689688   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:08:31.690939   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.690963   72220 api_server.go:131] duration metric: took 6.831773ms to wait for apiserver health ...
	I0425 20:08:31.690973   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.853816   72220 system_pods.go:59] 9 kube-system pods found
	I0425 20:08:31.853849   72220 system_pods.go:61] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:31.853856   72220 system_pods.go:61] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:31.853861   72220 system_pods.go:61] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:31.853868   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:31.853872   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:31.853877   72220 system_pods.go:61] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:31.853881   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:31.853889   72220 system_pods.go:61] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:31.853894   72220 system_pods.go:61] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:31.853907   72220 system_pods.go:74] duration metric: took 162.928561ms to wait for pod list to return data ...
	I0425 20:08:31.853916   72220 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:32.049906   72220 default_sa.go:45] found service account: "default"
	I0425 20:08:32.049932   72220 default_sa.go:55] duration metric: took 196.003422ms for default service account to be created ...
	I0425 20:08:32.049942   72220 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:32.255245   72220 system_pods.go:86] 9 kube-system pods found
	I0425 20:08:32.255290   72220 system_pods.go:89] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:32.255298   72220 system_pods.go:89] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:32.255304   72220 system_pods.go:89] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:32.255311   72220 system_pods.go:89] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:32.255317   72220 system_pods.go:89] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:32.255322   72220 system_pods.go:89] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:32.255328   72220 system_pods.go:89] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:32.255338   72220 system_pods.go:89] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:32.255348   72220 system_pods.go:89] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:32.255368   72220 system_pods.go:126] duration metric: took 205.41905ms to wait for k8s-apps to be running ...
	I0425 20:08:32.255378   72220 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:32.255429   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:32.274141   72220 system_svc.go:56] duration metric: took 18.75721ms WaitForService to wait for kubelet
	I0425 20:08:32.274173   72220 kubeadm.go:576] duration metric: took 4.77294686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:32.274198   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:32.449699   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:32.449727   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:32.449741   72220 node_conditions.go:105] duration metric: took 175.536406ms to run NodePressure ...
	I0425 20:08:32.449755   72220 start.go:240] waiting for startup goroutines ...
	I0425 20:08:32.449765   72220 start.go:245] waiting for cluster config update ...
	I0425 20:08:32.449778   72220 start.go:254] writing updated cluster config ...
	I0425 20:08:32.450108   72220 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:32.503317   72220 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:32.505391   72220 out.go:177] * Done! kubectl is now configured to use "no-preload-744552" cluster and "default" namespace by default
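Note: the apiserver health wait above is a plain HTTPS GET against the cluster endpoint. A manual spot check from the host would be something like (illustrative; -k skips cert verification):

	curl -k https://192.168.72.142:8443/healthz     # expect: ok

after which the reported control-plane version (v1.30.0 here) is compared against the local kubectl version to flag minor-version skew.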
	I0425 20:08:31.153636   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:08:31.158526   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:08:31.159775   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.159817   71966 api_server.go:131] duration metric: took 4.134911832s to wait for apiserver health ...
	I0425 20:08:31.159827   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.159847   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:31.159890   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:31.201597   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:31.201616   71966 cri.go:89] found id: ""
	I0425 20:08:31.201625   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:31.201667   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.206973   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:31.207039   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:31.248400   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:31.248424   71966 cri.go:89] found id: ""
	I0425 20:08:31.248435   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:31.248496   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.253822   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:31.253879   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:31.298921   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:31.298946   71966 cri.go:89] found id: ""
	I0425 20:08:31.298956   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:31.299003   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.304691   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:31.304758   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:31.351773   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:31.351796   71966 cri.go:89] found id: ""
	I0425 20:08:31.351804   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:31.351851   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.356599   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:31.356651   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:31.399655   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:31.399678   71966 cri.go:89] found id: ""
	I0425 20:08:31.399686   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:31.399740   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.405103   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:31.405154   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:31.452763   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:31.452785   71966 cri.go:89] found id: ""
	I0425 20:08:31.452794   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:31.452840   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.457788   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:31.457838   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:31.503746   71966 cri.go:89] found id: ""
	I0425 20:08:31.503780   71966 logs.go:276] 0 containers: []
	W0425 20:08:31.503791   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:31.503798   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:31.503868   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:31.548517   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:31.548543   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:31.548555   71966 cri.go:89] found id: ""
	I0425 20:08:31.548565   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:31.548631   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.553673   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.558271   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:31.558290   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:31.974349   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:31.974387   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:32.033292   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:32.033327   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:32.050762   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:32.050791   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:32.101591   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:32.101627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:32.142626   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:32.142652   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:32.203270   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:32.203315   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:32.247021   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:32.247048   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:32.294900   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:32.294936   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:32.353902   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:32.353934   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:32.488543   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:32.488584   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:32.569303   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:32.569358   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:32.622767   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:32.622802   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:35.181779   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:08:35.181813   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.181820   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.181826   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.181832   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.181837   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.181843   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.181851   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.181858   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.181867   71966 system_pods.go:74] duration metric: took 4.022033823s to wait for pod list to return data ...
	I0425 20:08:35.181879   71966 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:35.185387   71966 default_sa.go:45] found service account: "default"
	I0425 20:08:35.185413   71966 default_sa.go:55] duration metric: took 3.523751ms for default service account to be created ...
	I0425 20:08:35.185423   71966 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:35.195075   71966 system_pods.go:86] 8 kube-system pods found
	I0425 20:08:35.195099   71966 system_pods.go:89] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.195104   71966 system_pods.go:89] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.195109   71966 system_pods.go:89] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.195114   71966 system_pods.go:89] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.195118   71966 system_pods.go:89] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.195122   71966 system_pods.go:89] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.195128   71966 system_pods.go:89] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.195133   71966 system_pods.go:89] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.195139   71966 system_pods.go:126] duration metric: took 9.711803ms to wait for k8s-apps to be running ...
	I0425 20:08:35.195155   71966 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:35.195195   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:35.213494   71966 system_svc.go:56] duration metric: took 18.331225ms WaitForService to wait for kubelet
	I0425 20:08:35.213523   71966 kubeadm.go:576] duration metric: took 4m22.589912913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:35.213545   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:35.216461   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:35.216481   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:35.216493   71966 node_conditions.go:105] duration metric: took 2.94061ms to run NodePressure ...
	I0425 20:08:35.216502   71966 start.go:240] waiting for startup goroutines ...
	I0425 20:08:35.216509   71966 start.go:245] waiting for cluster config update ...
	I0425 20:08:35.216518   71966 start.go:254] writing updated cluster config ...
	I0425 20:08:35.216750   71966 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:35.265836   71966 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:35.269026   71966 out.go:177] * Done! kubectl is now configured to use "embed-certs-512173" cluster and "default" namespace by default
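Note: the NodePressure step above reads the node's reported capacity; the same values can be inspected directly, e.g. (assuming the kubectl context matches the profile name):

	kubectl --context embed-certs-512173 get nodes -o jsonpath='{.items[*].status.capacity}'

which for this node should include "cpu":"2" and "ephemeral-storage":"17734596Ki".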
	I0425 20:08:34.404410   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:34.404662   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:44.405293   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:44.405518   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:04.406406   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:04.406676   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.407969   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:44.408240   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.408259   72712 kubeadm.go:309] 
	I0425 20:09:44.408293   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:09:44.408355   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:09:44.408373   72712 kubeadm.go:309] 
	I0425 20:09:44.408417   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:09:44.408448   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:09:44.408562   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:09:44.408575   72712 kubeadm.go:309] 
	I0425 20:09:44.408655   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:09:44.408684   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:09:44.408711   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:09:44.408718   72712 kubeadm.go:309] 
	I0425 20:09:44.408812   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:09:44.408912   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:09:44.408939   72712 kubeadm.go:309] 
	I0425 20:09:44.409085   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:09:44.409217   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:09:44.409341   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:09:44.409418   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:09:44.409433   72712 kubeadm.go:309] 
	I0425 20:09:44.410319   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:09:44.410423   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:09:44.410510   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0425 20:09:44.410640   72712 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0425 20:09:44.410700   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:09:45.395830   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:09:45.412628   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:09:45.423387   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:09:45.423412   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:09:45.423465   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:09:45.434317   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:09:45.434389   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:09:45.445657   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:09:45.455698   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:09:45.455772   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:09:45.466137   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.476140   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:09:45.476192   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.486410   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:09:45.495465   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:09:45.495522   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
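Note: the config check above fails because none of the four kubeconfig files exist after the reset, so each grep exits non-zero and the file is removed anyway before init is retried. A compact manual equivalent of that cleanup pass (sketch only) would be:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done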
	I0425 20:09:45.505410   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:09:45.726416   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:11:42.214574   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:11:42.214715   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0425 20:11:42.216323   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:11:42.216393   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:11:42.216507   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:11:42.216650   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:11:42.216795   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:11:42.216882   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:11:42.218766   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:11:42.218847   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:11:42.218923   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:11:42.219042   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:11:42.219103   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:11:42.219167   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:11:42.219237   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:11:42.219321   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:11:42.219407   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:11:42.219519   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:11:42.219639   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:11:42.219694   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:11:42.219742   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:11:42.219786   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:11:42.219831   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:11:42.219883   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:11:42.219929   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:11:42.220029   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:11:42.220139   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:11:42.220204   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:11:42.220308   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:11:42.222891   72712 out.go:204]   - Booting up control plane ...
	I0425 20:11:42.222979   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:11:42.223054   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:11:42.223129   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:11:42.223222   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:11:42.223404   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:11:42.223459   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:11:42.223565   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.223835   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.223937   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224165   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224243   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224457   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224541   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224799   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224902   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.225125   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.225134   72712 kubeadm.go:309] 
	I0425 20:11:42.225166   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:11:42.225204   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:11:42.225210   72712 kubeadm.go:309] 
	I0425 20:11:42.225239   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:11:42.225267   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:11:42.225352   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:11:42.225358   72712 kubeadm.go:309] 
	I0425 20:11:42.225446   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:11:42.225476   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:11:42.225522   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:11:42.225533   72712 kubeadm.go:309] 
	I0425 20:11:42.225626   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:11:42.225714   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:11:42.225729   72712 kubeadm.go:309] 
	I0425 20:11:42.225875   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:11:42.225951   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:11:42.226022   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:11:42.226096   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:11:42.226129   72712 kubeadm.go:309] 
	I0425 20:11:42.226162   72712 kubeadm.go:393] duration metric: took 8m0.122692927s to StartCluster
	I0425 20:11:42.226242   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:11:42.226299   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:11:42.283295   72712 cri.go:89] found id: ""
	I0425 20:11:42.283320   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.283329   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:11:42.283335   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:11:42.283389   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:11:42.322462   72712 cri.go:89] found id: ""
	I0425 20:11:42.322493   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.322505   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:11:42.322512   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:11:42.322574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:11:42.372329   72712 cri.go:89] found id: ""
	I0425 20:11:42.372355   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.372363   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:11:42.372369   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:11:42.372416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:11:42.420348   72712 cri.go:89] found id: ""
	I0425 20:11:42.420374   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.420382   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:11:42.420389   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:11:42.420447   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:11:42.460274   72712 cri.go:89] found id: ""
	I0425 20:11:42.460317   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.460329   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:11:42.460337   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:11:42.460395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:11:42.503828   72712 cri.go:89] found id: ""
	I0425 20:11:42.503855   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.503867   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:11:42.503874   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:11:42.503933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:11:42.545045   72712 cri.go:89] found id: ""
	I0425 20:11:42.545070   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.545086   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:11:42.545095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:11:42.545156   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:11:42.586389   72712 cri.go:89] found id: ""
	I0425 20:11:42.586413   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.586421   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:11:42.586429   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:11:42.586440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:11:42.602835   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:11:42.602863   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:11:42.695131   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:11:42.695153   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:11:42.695168   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:11:42.819889   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:11:42.819922   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:11:42.869446   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:11:42.869474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0425 20:11:42.927184   72712 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0425 20:11:42.927236   72712 out.go:239] * 
	W0425 20:11:42.927291   72712 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.927311   72712 out.go:239] * 
	W0425 20:11:42.928275   72712 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 20:11:42.931353   72712 out.go:177] 
	W0425 20:11:42.932654   72712 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.932696   72712 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0425 20:11:42.932713   72712 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0425 20:11:42.934227   72712 out.go:177] 
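	For reference, the recovery steps suggested in the output above, collected in one place. This is only a sketch of what an operator might run on the affected node; the commands are the ones quoted by kubeadm and minikube in the log, and the final minikube invocation assumes the same profile and remaining flags used by the failing test (not shown here):
	
	  # inspect the kubelet, as kubeadm suggests
	  systemctl status kubelet
	  journalctl -xeu kubelet
	  # list Kubernetes containers through CRI-O, as kubeadm suggests
	  crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  # retry with the cgroup-driver override minikube suggests
	  minikube start --extra-config=kubelet.cgroup-driver=systemd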
	
	
	==> CRI-O <==
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.782684637Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076257782660980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf157e5b-afaf-45cf-956b-bb81b44a8bfb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.783256791Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4b67052-0c70-4c9a-aeb5-1cde31e79cc7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.783305409Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4b67052-0c70-4c9a-aeb5-1cde31e79cc7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.783543331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075479620869366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c451d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67ba6a4ea4e35e2b43f358a16c582b03342b863fa0cb48159052b28cb979308,PodSandboxId:135332d33750e30e406c5f99481716254aaf1e04169c75aa4f9559c6d6f27dcd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075459338479741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09c7377f-44eb-4764-97e2-b21add69ffaf,},Annotations:map[string]string{io.kubernetes.container.hash: 46eec6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0,PodSandboxId:514fb8d1dca62bb204cf622d1239158567f838553285306b019e800412cb59b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075456566393378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xsptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b974e5-9b6e-4647-81cc-4fd8aa94077c,},Annotations:map[string]string{io.kubernetes.container.hash: d5a36c9f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149,PodSandboxId:b1ddcd0c049a993aae5bdf0fbbad3dca6a34653633cb29359f94a3ade5f4b962,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075448826715075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8247p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc053d9-814c-4882-b
d11-5111e5a72635,},Annotations:map[string]string{io.kubernetes.container.hash: b4aae625,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075448820410597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c4
51d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4,PodSandboxId:d910c794e803aa51440b28e285bb1585be2f856c2ea6b3d884bd90b96287e06c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075444158751433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3836a19decee787d7cd4e27481d1676,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5,PodSandboxId:ca08ac66072f9a1e15f19674769d4b4ff7503f1c89fb800634c6bc7ec3a012af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075444088482866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c772bbb62054949d2fd93d6437431eb8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 33e1ff1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650,PodSandboxId:f20399c1b1127cc7a57a58e92e51e5fd2e3e8043e242562a57d81c3c9ca6594e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075444124813130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddeaa81ec9a2358ea082dc210cd7af0d,},Annotations:map[string]string{io.kubernetes.container.hash:
f161a577,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86,PodSandboxId:fda25866d81792a46d7118f7e7f6b3879e4e201ef7e13b4cece366dafffb67f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075444073895580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4947ab8541c12a4889282bf39fe1af10,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4b67052-0c70-4c9a-aeb5-1cde31e79cc7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.832119623Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f1cce01-3d96-4de1-84ef-fceb7b1f566c name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.832233676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f1cce01-3d96-4de1-84ef-fceb7b1f566c name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.833636995Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4ec4001-87ed-4efb-b33c-59655a8099f1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.834357513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076257834328470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4ec4001-87ed-4efb-b33c-59655a8099f1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.835124696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50824ab7-0833-400c-bc64-8c01cbcb5395 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.835180457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50824ab7-0833-400c-bc64-8c01cbcb5395 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.835383667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075479620869366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c451d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67ba6a4ea4e35e2b43f358a16c582b03342b863fa0cb48159052b28cb979308,PodSandboxId:135332d33750e30e406c5f99481716254aaf1e04169c75aa4f9559c6d6f27dcd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075459338479741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09c7377f-44eb-4764-97e2-b21add69ffaf,},Annotations:map[string]string{io.kubernetes.container.hash: 46eec6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0,PodSandboxId:514fb8d1dca62bb204cf622d1239158567f838553285306b019e800412cb59b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075456566393378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xsptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b974e5-9b6e-4647-81cc-4fd8aa94077c,},Annotations:map[string]string{io.kubernetes.container.hash: d5a36c9f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149,PodSandboxId:b1ddcd0c049a993aae5bdf0fbbad3dca6a34653633cb29359f94a3ade5f4b962,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075448826715075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8247p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc053d9-814c-4882-b
d11-5111e5a72635,},Annotations:map[string]string{io.kubernetes.container.hash: b4aae625,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075448820410597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c4
51d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4,PodSandboxId:d910c794e803aa51440b28e285bb1585be2f856c2ea6b3d884bd90b96287e06c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075444158751433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3836a19decee787d7cd4e27481d1676,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5,PodSandboxId:ca08ac66072f9a1e15f19674769d4b4ff7503f1c89fb800634c6bc7ec3a012af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075444088482866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c772bbb62054949d2fd93d6437431eb8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 33e1ff1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650,PodSandboxId:f20399c1b1127cc7a57a58e92e51e5fd2e3e8043e242562a57d81c3c9ca6594e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075444124813130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddeaa81ec9a2358ea082dc210cd7af0d,},Annotations:map[string]string{io.kubernetes.container.hash:
f161a577,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86,PodSandboxId:fda25866d81792a46d7118f7e7f6b3879e4e201ef7e13b4cece366dafffb67f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075444073895580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4947ab8541c12a4889282bf39fe1af10,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50824ab7-0833-400c-bc64-8c01cbcb5395 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.888042313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd39875c-e6b5-40a6-b053-11e7f7ab7304 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.888127833Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd39875c-e6b5-40a6-b053-11e7f7ab7304 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.891571812Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cfe5f60-8077-4e87-aa25-2e7ac424f916 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.892146152Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076257892114904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cfe5f60-8077-4e87-aa25-2e7ac424f916 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.893197626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5075e783-fcb0-4a85-986e-f39d12dccac8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.893342230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5075e783-fcb0-4a85-986e-f39d12dccac8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.893572764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075479620869366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c451d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67ba6a4ea4e35e2b43f358a16c582b03342b863fa0cb48159052b28cb979308,PodSandboxId:135332d33750e30e406c5f99481716254aaf1e04169c75aa4f9559c6d6f27dcd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075459338479741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09c7377f-44eb-4764-97e2-b21add69ffaf,},Annotations:map[string]string{io.kubernetes.container.hash: 46eec6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0,PodSandboxId:514fb8d1dca62bb204cf622d1239158567f838553285306b019e800412cb59b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075456566393378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xsptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b974e5-9b6e-4647-81cc-4fd8aa94077c,},Annotations:map[string]string{io.kubernetes.container.hash: d5a36c9f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149,PodSandboxId:b1ddcd0c049a993aae5bdf0fbbad3dca6a34653633cb29359f94a3ade5f4b962,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075448826715075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8247p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc053d9-814c-4882-b
d11-5111e5a72635,},Annotations:map[string]string{io.kubernetes.container.hash: b4aae625,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075448820410597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c4
51d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4,PodSandboxId:d910c794e803aa51440b28e285bb1585be2f856c2ea6b3d884bd90b96287e06c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075444158751433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3836a19decee787d7cd4e27481d1676,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5,PodSandboxId:ca08ac66072f9a1e15f19674769d4b4ff7503f1c89fb800634c6bc7ec3a012af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075444088482866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c772bbb62054949d2fd93d6437431eb8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 33e1ff1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650,PodSandboxId:f20399c1b1127cc7a57a58e92e51e5fd2e3e8043e242562a57d81c3c9ca6594e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075444124813130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddeaa81ec9a2358ea082dc210cd7af0d,},Annotations:map[string]string{io.kubernetes.container.hash:
f161a577,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86,PodSandboxId:fda25866d81792a46d7118f7e7f6b3879e4e201ef7e13b4cece366dafffb67f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075444073895580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4947ab8541c12a4889282bf39fe1af10,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5075e783-fcb0-4a85-986e-f39d12dccac8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.936517468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd859b7a-e9e1-4af7-af09-8348ad2abb32 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.936641231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd859b7a-e9e1-4af7-af09-8348ad2abb32 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.938571043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=768140e1-36f0-4fcb-a97d-dd5c4e46fe1f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.939052082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076257939025247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=768140e1-36f0-4fcb-a97d-dd5c4e46fe1f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.939697575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ef0c6bf-6885-4ce2-979f-43c8b9ac9cc9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.939751074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ef0c6bf-6885-4ce2-979f-43c8b9ac9cc9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:17:37 embed-certs-512173 crio[732]: time="2024-04-25 20:17:37.940033627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075479620869366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c451d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67ba6a4ea4e35e2b43f358a16c582b03342b863fa0cb48159052b28cb979308,PodSandboxId:135332d33750e30e406c5f99481716254aaf1e04169c75aa4f9559c6d6f27dcd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075459338479741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09c7377f-44eb-4764-97e2-b21add69ffaf,},Annotations:map[string]string{io.kubernetes.container.hash: 46eec6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0,PodSandboxId:514fb8d1dca62bb204cf622d1239158567f838553285306b019e800412cb59b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075456566393378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xsptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b974e5-9b6e-4647-81cc-4fd8aa94077c,},Annotations:map[string]string{io.kubernetes.container.hash: d5a36c9f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149,PodSandboxId:b1ddcd0c049a993aae5bdf0fbbad3dca6a34653633cb29359f94a3ade5f4b962,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075448826715075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8247p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc053d9-814c-4882-b
d11-5111e5a72635,},Annotations:map[string]string{io.kubernetes.container.hash: b4aae625,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075448820410597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c4
51d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4,PodSandboxId:d910c794e803aa51440b28e285bb1585be2f856c2ea6b3d884bd90b96287e06c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075444158751433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3836a19decee787d7cd4e27481d1676,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5,PodSandboxId:ca08ac66072f9a1e15f19674769d4b4ff7503f1c89fb800634c6bc7ec3a012af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075444088482866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c772bbb62054949d2fd93d6437431eb8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 33e1ff1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650,PodSandboxId:f20399c1b1127cc7a57a58e92e51e5fd2e3e8043e242562a57d81c3c9ca6594e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075444124813130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddeaa81ec9a2358ea082dc210cd7af0d,},Annotations:map[string]string{io.kubernetes.container.hash:
f161a577,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86,PodSandboxId:fda25866d81792a46d7118f7e7f6b3879e4e201ef7e13b4cece366dafffb67f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075444073895580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4947ab8541c12a4889282bf39fe1af10,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ef0c6bf-6885-4ce2-979f-43c8b9ac9cc9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf330fbdb7c0d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   1fd7b8630b1b2       storage-provisioner
	e67ba6a4ea4e3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   135332d33750e       busybox
	8acd5626916a2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   514fb8d1dca62       coredns-7db6d8ff4d-xsptj
	1c3e9dc1ffc5f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago      Running             kube-proxy                1                   b1ddcd0c049a9       kube-proxy-8247p
	84313d4e49ed1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   1fd7b8630b1b2       storage-provisioner
	3bae27a3c70b5       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      13 minutes ago      Running             kube-scheduler            1                   d910c794e803a       kube-scheduler-embed-certs-512173
	26f6a9b78dc23       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   f20399c1b1127       etcd-embed-certs-512173
	911aab4d436ac       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      13 minutes ago      Running             kube-apiserver            1                   ca08ac66072f9       kube-apiserver-embed-certs-512173
	df45510448ab3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      13 minutes ago      Running             kube-controller-manager   1                   fda25866d8179       kube-controller-manager-embed-certs-512173
	
	
	==> coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36306 - 2835 "HINFO IN 6010454245023336192.5364635277441556275. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015831528s
	
	
	==> describe nodes <==
	Name:               embed-certs-512173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-512173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=embed-certs-512173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T19_54_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:54:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-512173
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 20:17:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 20:14:50 +0000   Thu, 25 Apr 2024 19:54:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 20:14:50 +0000   Thu, 25 Apr 2024 19:54:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 20:14:50 +0000   Thu, 25 Apr 2024 19:54:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 20:14:50 +0000   Thu, 25 Apr 2024 20:04:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.7
	  Hostname:    embed-certs-512173
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a5b1d22c0c3443eb3283716fbcc51d0
	  System UUID:                1a5b1d22-c0c3-443e-b328-3716fbcc51d0
	  Boot ID:                    76e3f5ae-a8e6-4c4b-9e2a-5797bfe9b570
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-xsptj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-512173                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-512173             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-512173    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-8247p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-512173             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-mlkqr               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node embed-certs-512173 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node embed-certs-512173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node embed-certs-512173 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-512173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-512173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-512173 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node embed-certs-512173 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-512173 event: Registered Node embed-certs-512173 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-512173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-512173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-512173 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-512173 event: Registered Node embed-certs-512173 in Controller
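
Editor's note: the Allocated resources figures above come from dividing the summed pod requests in the non-terminated pods table by the node's allocatable capacity. A minimal sketch of that arithmetic, using only the values reported above (kubectl truncates the results to the 42%, 17% and 8% shown):

package main

import "fmt"

func main() {
	// Allocatable capacity reported in the node description above.
	allocatableCPUMilli := 2000.0 // 2 CPUs
	allocatableMemKi := 2164184.0 // 2164184Ki

	// Sums of the per-pod figures in the non-terminated pods table.
	cpuRequestsMilli := 850.0       // 100m+100m+250m+200m+100m+100m
	memRequestsKi := 370.0 * 1024.0 // 370Mi
	memLimitsKi := 170.0 * 1024.0   // 170Mi

	fmt.Printf("cpu requests:    %.1f%%\n", 100*cpuRequestsMilli/allocatableCPUMilli) // 42.5
	fmt.Printf("memory requests: %.1f%%\n", 100*memRequestsKi/allocatableMemKi)       // 17.5
	fmt.Printf("memory limits:   %.1f%%\n", 100*memLimitsKi/allocatableMemKi)         // 8.0
}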
	
	
	==> dmesg <==
	[Apr25 20:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062050] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049471] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.203093] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.631235] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.776532] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.526277] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.065973] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074866] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.213124] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.135683] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.321357] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[Apr25 20:04] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.068831] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.406962] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +5.624452] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.964307] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +1.747342] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.728604] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] <==
	{"level":"info","ts":"2024-04-25T20:04:04.568721Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b162f841703ff885","local-member-id":"856b77cd5251110c","added-peer-id":"856b77cd5251110c","added-peer-peer-urls":["https://192.168.50.7:2380"]}
	{"level":"info","ts":"2024-04-25T20:04:04.568849Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b162f841703ff885","local-member-id":"856b77cd5251110c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T20:04:04.568981Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T20:04:04.57416Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-25T20:04:04.574418Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"856b77cd5251110c","initial-advertise-peer-urls":["https://192.168.50.7:2380"],"listen-peer-urls":["https://192.168.50.7:2380"],"advertise-client-urls":["https://192.168.50.7:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.7:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-25T20:04:04.574482Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-25T20:04:04.574577Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-04-25T20:04:04.57461Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-04-25T20:04:06.046963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-25T20:04:06.047071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-25T20:04:06.047124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgPreVoteResp from 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-04-25T20:04:06.047159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became candidate at term 3"}
	{"level":"info","ts":"2024-04-25T20:04:06.047183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 3"}
	{"level":"info","ts":"2024-04-25T20:04:06.04721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became leader at term 3"}
	{"level":"info","ts":"2024-04-25T20:04:06.047235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 3"}
	{"level":"info","ts":"2024-04-25T20:04:06.093136Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T20:04:06.094101Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"856b77cd5251110c","local-member-attributes":"{Name:embed-certs-512173 ClientURLs:[https://192.168.50.7:2379]}","request-path":"/0/members/856b77cd5251110c/attributes","cluster-id":"b162f841703ff885","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T20:04:06.094273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T20:04:06.094676Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T20:04:06.094718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T20:04:06.096329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-25T20:04:06.098033Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.7:2379"}
	{"level":"info","ts":"2024-04-25T20:14:06.142394Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2024-04-25T20:14:06.155115Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":841,"took":"11.595431ms","hash":4141422069,"current-db-size-bytes":2617344,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2617344,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-04-25T20:14:06.155211Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4141422069,"revision":841,"compact-revision":-1}
	
	
	==> kernel <==
	 20:17:38 up 14 min,  0 users,  load average: 0.16, 0.20, 0.16
	Linux embed-certs-512173 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] <==
	I0425 20:12:08.527631       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:14:07.531263       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:14:07.531418       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0425 20:14:08.532028       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:14:08.532251       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:14:08.532317       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:14:08.532185       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:14:08.532426       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:14:08.534456       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:15:08.532543       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:15:08.532775       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:15:08.532807       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:15:08.534564       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:15:08.534719       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:15:08.535061       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:17:08.534021       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:17:08.534084       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:17:08.534093       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:17:08.535302       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:17:08.535483       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:17:08.535526       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
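
Editor's note: the repeated 503s above mean the aggregated v1beta1.metrics.k8s.io API, backed by the metrics-server pod that never started, is unreachable from the apiserver. A minimal sketch of checking whether the group is being served at all via client-go discovery; reading the kubeconfig from the KUBECONFIG environment variable is an assumption of this sketch, not something the test harness does:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG as the config source is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		// With an aggregated API down, discovery can partially fail; keep going.
		fmt.Println("discovery error:", err)
	}
	served := false
	if groups != nil {
		for _, g := range groups.Groups {
			if g.Name == "metrics.k8s.io" {
				served = true
			}
		}
	}
	fmt.Println("metrics.k8s.io served:", served)
}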
	
	
	==> kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] <==
	I0425 20:11:52.675567       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:12:22.026854       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:12:22.684173       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:12:52.032511       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:12:52.693180       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:13:22.039240       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:13:22.701287       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:13:52.044609       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:13:52.709332       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:14:22.051395       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:14:22.717690       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:14:52.057576       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:14:52.725520       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:15:22.063193       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:15:22.733422       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0425 20:15:24.427270       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="219.073µs"
	I0425 20:15:39.425011       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="110.294µs"
	E0425 20:15:52.069072       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:15:52.744320       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:16:22.074678       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:16:22.752780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:16:52.080876       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:16:52.765589       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:17:22.086829       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:17:22.776495       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] <==
	I0425 20:04:09.010868       1 server_linux.go:69] "Using iptables proxy"
	I0425 20:04:09.019092       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.7"]
	I0425 20:04:09.061614       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 20:04:09.061642       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 20:04:09.061656       1 server_linux.go:165] "Using iptables Proxier"
	I0425 20:04:09.065135       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 20:04:09.065366       1 server.go:872] "Version info" version="v1.30.0"
	I0425 20:04:09.065418       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 20:04:09.066642       1 config.go:192] "Starting service config controller"
	I0425 20:04:09.067743       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 20:04:09.067499       1 config.go:319] "Starting node config controller"
	I0425 20:04:09.067992       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 20:04:09.067216       1 config.go:101] "Starting endpoint slice config controller"
	I0425 20:04:09.068213       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 20:04:09.169006       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 20:04:09.169081       1 shared_informer.go:320] Caches are synced for service config
	I0425 20:04:09.169324       1 shared_informer.go:320] Caches are synced for node config
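
Editor's note: the "Waiting for caches to sync" / "Caches are synced" pairs above are the standard client-go shared-informer startup pattern. A minimal sketch of the same pattern against a service informer, for illustration only; this is not kube-proxy's actual code, and the KUBECONFIG path is an assumption:

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumption
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	services := factory.Core().V1().Services().Informer()

	factory.Start(stop)
	fmt.Println("waiting for caches to sync")
	if !cache.WaitForCacheSync(stop, services.HasSynced) {
		fmt.Println("caches did not sync")
		return
	}
	fmt.Println("caches are synced")
}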
	
	
	==> kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] <==
	I0425 20:04:05.303751       1 serving.go:380] Generated self-signed cert in-memory
	W0425 20:04:07.470863       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0425 20:04:07.470994       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 20:04:07.471013       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0425 20:04:07.471019       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0425 20:04:07.544638       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0425 20:04:07.544688       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 20:04:07.547143       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0425 20:04:07.547428       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0425 20:04:07.547553       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0425 20:04:07.547670       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0425 20:04:07.648014       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 25 20:15:10 embed-certs-512173 kubelet[953]: E0425 20:15:10.446064     953 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 25 20:15:10 embed-certs-512173 kubelet[953]: E0425 20:15:10.446157     953 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 25 20:15:10 embed-certs-512173 kubelet[953]: E0425 20:15:10.446462     953 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97tkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-mlkqr_kube-system(85113896-4f9c-4b53-8bc9-c138b8a643fc): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 25 20:15:10 embed-certs-512173 kubelet[953]: E0425 20:15:10.446508     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:15:24 embed-certs-512173 kubelet[953]: E0425 20:15:24.408291     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:15:39 embed-certs-512173 kubelet[953]: E0425 20:15:39.410278     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:15:50 embed-certs-512173 kubelet[953]: E0425 20:15:50.407709     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:16:01 embed-certs-512173 kubelet[953]: E0425 20:16:01.408235     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:16:03 embed-certs-512173 kubelet[953]: E0425 20:16:03.442895     953 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:16:03 embed-certs-512173 kubelet[953]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:16:03 embed-certs-512173 kubelet[953]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:16:03 embed-certs-512173 kubelet[953]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:16:03 embed-certs-512173 kubelet[953]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:16:16 embed-certs-512173 kubelet[953]: E0425 20:16:16.409627     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:16:29 embed-certs-512173 kubelet[953]: E0425 20:16:29.408113     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:16:40 embed-certs-512173 kubelet[953]: E0425 20:16:40.408277     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:16:52 embed-certs-512173 kubelet[953]: E0425 20:16:52.407802     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:17:03 embed-certs-512173 kubelet[953]: E0425 20:17:03.439518     953 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:17:03 embed-certs-512173 kubelet[953]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:17:03 embed-certs-512173 kubelet[953]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:17:03 embed-certs-512173 kubelet[953]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:17:03 embed-certs-512173 kubelet[953]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:17:06 embed-certs-512173 kubelet[953]: E0425 20:17:06.407688     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:17:20 embed-certs-512173 kubelet[953]: E0425 20:17:20.410057     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:17:32 embed-certs-512173 kubelet[953]: E0425 20:17:32.408563     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
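
Editor's note: the kubelet errors above are the expected pull failures for fake.domain/registry.k8s.io/echoserver:1.4, whose registry host cannot resolve, so the metrics-server container stays in ImagePullBackOff. A minimal sketch that surfaces such waiting reasons from pod status via client-go; the KUBECONFIG source is an assumption:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumption
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				// The metrics-server pod above would report ImagePullBackOff here.
				fmt.Printf("%s/%s: %s\n", pod.Name, cs.Name, cs.State.Waiting.Reason)
			}
		}
	}
}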
	
	
	==> storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] <==
	I0425 20:04:08.952379       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0425 20:04:38.955526       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
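
Editor's note: the fatal line above is the first storage-provisioner attempt timing out against the in-cluster apiserver service VIP (10.96.0.1:443) while the control plane was still coming back; the replacement container below succeeds. A minimal sketch of the same reachability probe; it only makes sense from inside the cluster network (e.g. from a pod or via minikube ssh), which is an assumption about where it runs:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Service VIP and timeout mirror the failed request in the log line above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 32*time.Second)
	if err != nil {
		fmt.Println("apiserver service unreachable:", err) // e.g. i/o timeout during the restart window
		return
	}
	conn.Close()
	fmt.Println("apiserver service reachable")
}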
	
	
	==> storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] <==
	I0425 20:04:39.726353       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0425 20:04:39.735143       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0425 20:04:39.736194       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0425 20:04:57.138090       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0425 20:04:57.138319       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-512173_f2e61450-39de-4da9-bd72-e7b218a0ab19!
	I0425 20:04:57.140852       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f560c90-9231-48af-a706-8beaa9fbf6e0", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-512173_f2e61450-39de-4da9-bd72-e7b218a0ab19 became leader
	I0425 20:04:57.239218       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-512173_f2e61450-39de-4da9-bd72-e7b218a0ab19!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-512173 -n embed-certs-512173
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-512173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-mlkqr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-512173 describe pod metrics-server-569cc877fc-mlkqr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-512173 describe pod metrics-server-569cc877fc-mlkqr: exit status 1 (61.618168ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-mlkqr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-512173 describe pod metrics-server-569cc877fc-mlkqr: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.78s)
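
Editor's note: the exit status 1 just above is kubectl reporting NotFound — the metrics-server pod listed as non-running had already been deleted by the time the post-mortem describe ran. A minimal sketch of distinguishing that race from a real lookup failure via client-go; the pod name and namespace are taken from the output above, and the KUBECONFIG source is an assumption:

package main

import (
	"context"
	"fmt"
	"os"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumption
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, err = client.CoreV1().Pods("kube-system").Get(context.Background(), "metrics-server-569cc877fc-mlkqr", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		// Same outcome the post-mortem describe step hit: the pod was already gone.
		fmt.Println("pod not found: it was deleted or replaced between the list and the describe")
	case err != nil:
		fmt.Println("unexpected error:", err)
	default:
		fmt.Println("pod still exists")
	}
}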

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:11:48.449053   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
[previous warning repeated 24 more times]
E0425 20:12:34.754292   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
[previous warning repeated 21 more times]
E0425 20:12:57.270757   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
[previous warning repeated 13 more times]
E0425 20:13:11.493725   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
[previous warning repeated 9 more times]
E0425 20:13:21.359410   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
[previous warning repeated 5 more times]
E0425 20:13:27.582601   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
[previous warning repeated 8 more times]
E0425 20:13:36.328601   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
[previous warning repeated 43 more times]
E0425 20:14:20.314345   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
[previous warning repeated 23 more times]
E0425 20:14:44.404238   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
[previous warning repeated 6 more times]
E0425 20:14:50.628685   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
[previous warning repeated 3 more times]
E0425 20:14:55.065212   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:15:12.602707   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:15:45.438669   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:16:11.710289   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:16:39.378973   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:16:48.449127   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:17:57.270070   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:18:21.358907   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:18:27.582893   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:18:36.328130   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:19:55.064954   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:20:12.603218   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 20:20:45.438798   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210442 -n old-k8s-version-210442
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 2 (249.343758ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-210442" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
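For context on what the wait above was doing: the harness repeatedly lists pods matching "k8s-app=kubernetes-dashboard" against the profile's apiserver (192.168.61.136:8443 here) until one reports Ready or the 9m0s deadline expires; with the apiserver stopped, every poll returns the "connection refused" warning seen earlier in this log. Below is a minimal client-go sketch of an equivalent check, not the harness's own code; the kubeconfig path and 5-second poll interval are illustrative assumptions.

// Sketch only: poll for a Ready kubernetes-dashboard pod by label selector,
// mirroring the kind of wait that timed out above. The kubeconfig path and
// the 5s poll interval are assumptions, not values taken from the harness.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// With the apiserver down this surfaces as the repeated
			// "connection refused" warnings in the log above.
			fmt.Println("WARNING: pod list returned:", err)
		} else {
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("dashboard pod is Ready:", p.Name)
						return
					}
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard:", ctx.Err())
			return
		case <-time.After(5 * time.Second):
		}
	}
}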
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 2 (238.472834ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-210442 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-210442 logs -n 25: (1.664594012s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-120641 sudo cat                             | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo find                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo crio                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-120641                                      | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113000 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:54 UTC |
	|         | disable-driver-mounts-113000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-512173            | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-744552             | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142196  | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210442        | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-512173                 | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-744552                  | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142196       | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:07 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210442             | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:59:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:59:17.353932   72712 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:59:17.354045   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354055   72712 out.go:304] Setting ErrFile to fd 2...
	I0425 19:59:17.354059   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354269   72712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:59:17.354795   72712 out.go:298] Setting JSON to false
	I0425 19:59:17.355681   72712 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6103,"bootTime":1714069054,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:59:17.355740   72712 start.go:139] virtualization: kvm guest
	I0425 19:59:17.357921   72712 out.go:177] * [old-k8s-version-210442] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:59:17.359325   72712 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:59:17.360640   72712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:59:17.359305   72712 notify.go:220] Checking for updates...
	I0425 19:59:17.361801   72712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:59:17.363086   72712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:59:17.364512   72712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:59:17.365842   72712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:59:17.367508   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 19:59:17.367909   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.367946   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.382995   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0425 19:59:17.383362   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.383991   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.384016   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.384378   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.384566   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.386317   72712 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0425 19:59:17.387599   72712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:59:17.387904   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.387948   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.402999   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0425 19:59:17.403506   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.403962   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.403986   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.404318   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.404472   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.438308   72712 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:59:17.439686   72712 start.go:297] selected driver: kvm2
	I0425 19:59:17.439716   72712 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.439831   72712 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:59:17.440486   72712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.440553   72712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:59:17.454719   72712 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:59:17.455114   72712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:59:17.455184   72712 cni.go:84] Creating CNI manager for ""
	I0425 19:59:17.455203   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:59:17.455266   72712 start.go:340] cluster config:
	{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.455393   72712 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.457210   72712 out.go:177] * Starting "old-k8s-version-210442" primary control-plane node in "old-k8s-version-210442" cluster
	I0425 19:59:18.474583   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:17.458384   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 19:59:17.458418   72712 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:59:17.458430   72712 cache.go:56] Caching tarball of preloaded images
	I0425 19:59:17.458517   72712 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:59:17.458529   72712 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0425 19:59:17.458638   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 19:59:17.458844   72712 start.go:360] acquireMachinesLock for old-k8s-version-210442: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:59:24.554517   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:27.626446   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:33.706451   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:36.778527   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:42.858471   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:45.930403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:52.010482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:55.082403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:01.162466   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:04.234537   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:10.314506   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:13.386463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:19.466523   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:22.538461   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:28.622423   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:31.690489   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:37.770534   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:40.842458   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:46.922463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:49.994524   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:56.074478   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:59.146487   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:05.226452   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:08.298480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:14.378455   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:17.450469   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:23.530513   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:26.602470   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:32.682497   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:35.754500   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:41.834480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:44.906482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:50.986468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:54.058502   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:00.138459   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:03.210554   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:09.290491   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:12.362472   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:18.442476   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:21.514468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.599158   72220 start.go:364] duration metric: took 4m21.632012686s to acquireMachinesLock for "no-preload-744552"
	I0425 20:02:30.599206   72220 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:30.599212   72220 fix.go:54] fixHost starting: 
	I0425 20:02:30.599516   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:30.599545   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:30.614130   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0425 20:02:30.614502   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:30.614962   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:02:30.614979   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:30.615306   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:30.615513   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:30.615640   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:02:30.617129   72220 fix.go:112] recreateIfNeeded on no-preload-744552: state=Stopped err=<nil>
	I0425 20:02:30.617150   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	W0425 20:02:30.617300   72220 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:30.619253   72220 out.go:177] * Restarting existing kvm2 VM for "no-preload-744552" ...
	I0425 20:02:27.594454   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.596600   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:02:30.596654   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.596986   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:02:30.597016   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.597206   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:02:30.599042   71966 machine.go:97] duration metric: took 4m44.620242563s to provisionDockerMachine
	I0425 20:02:30.599079   71966 fix.go:56] duration metric: took 4m44.639860566s for fixHost
	I0425 20:02:30.599085   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 4m44.639890108s
	W0425 20:02:30.599104   71966 start.go:713] error starting host: provision: host is not running
	W0425 20:02:30.599182   71966 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0425 20:02:30.599192   71966 start.go:728] Will try again in 5 seconds ...
	I0425 20:02:30.620801   72220 main.go:141] libmachine: (no-preload-744552) Calling .Start
	I0425 20:02:30.620978   72220 main.go:141] libmachine: (no-preload-744552) Ensuring networks are active...
	I0425 20:02:30.621640   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network default is active
	I0425 20:02:30.621965   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network mk-no-preload-744552 is active
	I0425 20:02:30.622317   72220 main.go:141] libmachine: (no-preload-744552) Getting domain xml...
	I0425 20:02:30.623010   72220 main.go:141] libmachine: (no-preload-744552) Creating domain...
	I0425 20:02:31.809967   72220 main.go:141] libmachine: (no-preload-744552) Waiting to get IP...
	I0425 20:02:31.810856   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:31.811353   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:31.811403   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:31.811308   73381 retry.go:31] will retry after 294.641704ms: waiting for machine to come up
	I0425 20:02:32.107955   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.108508   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.108542   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.108449   73381 retry.go:31] will retry after 373.307428ms: waiting for machine to come up
	I0425 20:02:32.483111   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.483590   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.483619   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.483546   73381 retry.go:31] will retry after 484.455862ms: waiting for machine to come up
	I0425 20:02:32.969188   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.969657   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.969694   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.969602   73381 retry.go:31] will retry after 382.359725ms: waiting for machine to come up
	I0425 20:02:33.353143   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.353598   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.353621   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.353550   73381 retry.go:31] will retry after 515.389674ms: waiting for machine to come up
	I0425 20:02:35.602273   71966 start.go:360] acquireMachinesLock for embed-certs-512173: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 20:02:33.870172   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.870652   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.870676   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.870603   73381 retry.go:31] will retry after 714.032032ms: waiting for machine to come up
	I0425 20:02:34.586478   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:34.586833   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:34.586861   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:34.586791   73381 retry.go:31] will retry after 1.005122465s: waiting for machine to come up
	I0425 20:02:35.593962   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:35.594367   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:35.594400   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:35.594310   73381 retry.go:31] will retry after 1.483740326s: waiting for machine to come up
	I0425 20:02:37.079306   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:37.079751   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:37.079784   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:37.079700   73381 retry.go:31] will retry after 1.828802911s: waiting for machine to come up
	I0425 20:02:38.910631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:38.911138   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:38.911163   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:38.911086   73381 retry.go:31] will retry after 1.528405609s: waiting for machine to come up
	I0425 20:02:40.441741   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:40.442251   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:40.442277   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:40.442200   73381 retry.go:31] will retry after 2.817901976s: waiting for machine to come up
	I0425 20:02:43.263903   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:43.264376   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:43.264408   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:43.264324   73381 retry.go:31] will retry after 2.258888981s: waiting for machine to come up
	I0425 20:02:45.525701   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:45.526139   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:45.526168   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:45.526106   73381 retry.go:31] will retry after 4.008258204s: waiting for machine to come up
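The retry.go:31 entries above show the KVM driver polling libvirt for the domain's DHCP lease and backing off a little longer between attempts. A minimal Go sketch of that poll-and-backoff shape, assuming a hypothetical lookupIP helper and illustrative delays (this is not minikube's actual retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoIP stands in for "unable to find current IP address of domain ...".
var errNoIP = errors.New("machine has no IP yet")

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases.
func lookupIP() (string, error) {
	return "", errNoIP // pretend the lease has not appeared yet
}

// waitForIP polls lookupIP until it succeeds or the deadline passes, sleeping
// a jittered, growing delay after each failure, similar in spirit to the
// "will retry after ...: waiting for machine to come up" lines in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2 // back off
	}
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}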
	I0425 20:02:50.951421   72304 start.go:364] duration metric: took 4m34.5614094s to acquireMachinesLock for "default-k8s-diff-port-142196"
	I0425 20:02:50.951491   72304 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:50.951500   72304 fix.go:54] fixHost starting: 
	I0425 20:02:50.951906   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:50.951944   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:50.968074   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33481
	I0425 20:02:50.968452   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:50.968862   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:02:50.968886   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:50.969238   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:50.969460   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:02:50.969622   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:02:50.971100   72304 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142196: state=Stopped err=<nil>
	I0425 20:02:50.971125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	W0425 20:02:50.971271   72304 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:50.974623   72304 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142196" ...
	I0425 20:02:50.975991   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Start
	I0425 20:02:50.976154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring networks are active...
	I0425 20:02:50.976794   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network default is active
	I0425 20:02:50.977111   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network mk-default-k8s-diff-port-142196 is active
	I0425 20:02:50.977490   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Getting domain xml...
	I0425 20:02:50.978200   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Creating domain...
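The acquireMachinesLock line near the top carries a 500ms delay and a 13m timeout, and the "duration metric: took 4m34.5614094s to acquireMachinesLock" line shows one profile waiting while another holds the lock. A rough sketch of that delay/timeout pattern, assuming a throwaway temp-file lock rather than minikube's real mutex implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// acquire takes an exclusive lock by creating a lock file, polling every
// `delay` until `timeout` expires. Only a sketch of the delay/timeout shape
// visible in the acquireMachinesLock log line; the real lock is more involved.
func acquire(name string, delay, timeout time.Duration) (release func(), err error) {
	path := filepath.Join(os.TempDir(), name+".lock")
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for lock %q", name)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquire("machines-demo", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
}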
	I0425 20:02:49.538522   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.538999   72220 main.go:141] libmachine: (no-preload-744552) Found IP for machine: 192.168.72.142
	I0425 20:02:49.539033   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has current primary IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.539043   72220 main.go:141] libmachine: (no-preload-744552) Reserving static IP address...
	I0425 20:02:49.539420   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.539458   72220 main.go:141] libmachine: (no-preload-744552) DBG | skip adding static IP to network mk-no-preload-744552 - found existing host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"}
	I0425 20:02:49.539469   72220 main.go:141] libmachine: (no-preload-744552) Reserved static IP address: 192.168.72.142
	I0425 20:02:49.539483   72220 main.go:141] libmachine: (no-preload-744552) Waiting for SSH to be available...
	I0425 20:02:49.539490   72220 main.go:141] libmachine: (no-preload-744552) DBG | Getting to WaitForSSH function...
	I0425 20:02:49.541631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542042   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.542073   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542221   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH client type: external
	I0425 20:02:49.542270   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa (-rw-------)
	I0425 20:02:49.542300   72220 main.go:141] libmachine: (no-preload-744552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:02:49.542316   72220 main.go:141] libmachine: (no-preload-744552) DBG | About to run SSH command:
	I0425 20:02:49.542334   72220 main.go:141] libmachine: (no-preload-744552) DBG | exit 0
	I0425 20:02:49.670034   72220 main.go:141] libmachine: (no-preload-744552) DBG | SSH cmd err, output: <nil>: 
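The "Using SSH client type: external" block spells out the exact ssh options used to probe the guest with `exit 0` until the command succeeds. A self-contained sketch of the same probe; the user, address and key path are placeholders copied from the log, not a real environment:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady probes a machine the way the WaitForSSH step above does: run
// `exit 0` over ssh with host-key checking disabled and report whether the
// command succeeded. Flags mirror the ones listed in the log.
func sshReady(user, addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, addr),
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready: %v (output: %s)", err, out)
	}
	return nil
}

func main() {
	if err := sshReady("docker", "192.168.72.142", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("SSH cmd err, output: <nil>")
	}
}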
	I0425 20:02:49.670414   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetConfigRaw
	I0425 20:02:49.671039   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:49.673279   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673592   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.673629   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673878   72220 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/config.json ...
	I0425 20:02:49.674066   72220 machine.go:94] provisionDockerMachine start ...
	I0425 20:02:49.674083   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:49.674317   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.676767   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677084   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.677115   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677238   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.677413   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677562   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677698   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.677841   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.678037   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.678049   72220 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:02:49.790734   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:02:49.790764   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791028   72220 buildroot.go:166] provisioning hostname "no-preload-744552"
	I0425 20:02:49.791061   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791248   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.793907   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794279   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.794313   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794450   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.794649   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794787   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794908   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.795054   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.795256   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.795277   72220 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-744552 && echo "no-preload-744552" | sudo tee /etc/hostname
	I0425 20:02:49.925459   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744552
	
	I0425 20:02:49.925483   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.928282   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928646   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.928680   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928831   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.929012   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929194   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929327   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.929481   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.929679   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.929709   72220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-744552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-744552/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-744552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:02:50.052805   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
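The provisioning step above sets the hostname over SSH and then patches /etc/hosts with a guarded snippet so the 127.0.1.1 entry is replaced or appended only when the name is missing. A small sketch that renders that same guard for a given hostname (purely illustrative; the real provisioner templates this inside minikube):

package main

import "fmt"

// hostsFixup renders the /etc/hosts guard sent over SSH above: replace an
// existing 127.0.1.1 entry or append one, but only if the hostname is not
// already present anywhere in the file.
func hostsFixup(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Printf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname\n", "no-preload-744552")
	fmt.Println(hostsFixup("no-preload-744552"))
}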
	I0425 20:02:50.052841   72220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:02:50.052861   72220 buildroot.go:174] setting up certificates
	I0425 20:02:50.052875   72220 provision.go:84] configureAuth start
	I0425 20:02:50.052887   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:50.053193   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.055800   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056145   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.056168   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056339   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.058090   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058395   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.058429   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058526   72220 provision.go:143] copyHostCerts
	I0425 20:02:50.058577   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:02:50.058587   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:02:50.058647   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:02:50.058742   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:02:50.058750   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:02:50.058774   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:02:50.058827   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:02:50.058834   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:02:50.058855   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:02:50.058904   72220 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.no-preload-744552 san=[127.0.0.1 192.168.72.142 localhost minikube no-preload-744552]
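provision.go:117 generates a server certificate whose SANs cover 127.0.0.1, the VM's IP, localhost, minikube and the machine name. A Go standard-library sketch of issuing a certificate with those SANs; it self-signs for brevity, whereas the logged step signs with the minikube CA (ca.pem/ca-key.pem), so treat the details as illustrative:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-744552"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line: addresses and names the server cert must cover.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.142")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-744552"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}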
	I0425 20:02:50.247711   72220 provision.go:177] copyRemoteCerts
	I0425 20:02:50.247768   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:02:50.247792   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.250146   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250560   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.250600   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250780   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.250978   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.251128   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.251272   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.338105   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:02:50.365554   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 20:02:50.391433   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:02:50.416606   72220 provision.go:87] duration metric: took 363.720332ms to configureAuth
	I0425 20:02:50.416627   72220 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:02:50.416795   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:02:50.416876   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.419385   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419731   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.419764   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419903   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.420079   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420322   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420557   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.420724   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.420909   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.420929   72220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:02:50.702065   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:02:50.702104   72220 machine.go:97] duration metric: took 1.028026584s to provisionDockerMachine
	I0425 20:02:50.702117   72220 start.go:293] postStartSetup for "no-preload-744552" (driver="kvm2")
	I0425 20:02:50.702131   72220 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:02:50.702165   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.702531   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:02:50.702572   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.705595   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.705948   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.705992   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.706173   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.706367   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.706588   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.706759   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.794791   72220 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:02:50.799592   72220 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:02:50.799621   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:02:50.799701   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:02:50.799799   72220 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:02:50.799913   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:02:50.810796   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:02:50.836919   72220 start.go:296] duration metric: took 134.787005ms for postStartSetup
	I0425 20:02:50.836972   72220 fix.go:56] duration metric: took 20.237758066s for fixHost
	I0425 20:02:50.836995   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.839818   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840295   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.840325   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840429   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.840600   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840752   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840929   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.841079   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.841307   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.841338   72220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:02:50.951251   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075370.921171901
	
	I0425 20:02:50.951272   72220 fix.go:216] guest clock: 1714075370.921171901
	I0425 20:02:50.951279   72220 fix.go:229] Guest: 2024-04-25 20:02:50.921171901 +0000 UTC Remote: 2024-04-25 20:02:50.836976462 +0000 UTC m=+282.018789867 (delta=84.195439ms)
	I0425 20:02:50.951312   72220 fix.go:200] guest clock delta is within tolerance: 84.195439ms
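fix.go compares the guest's `date +%s.%N` output against the host-side wall clock and accepts the skew if it stays inside a tolerance. Reproducing the arithmetic from the two timestamps in the log (the 2s tolerance here is an assumption for illustration, not minikube's actual bound):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns `date +%s.%N` output (seconds[.nanoseconds]) into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	s = strings.TrimSpace(s)
	sec, frac := s, ""
	if i := strings.IndexByte(s, '.'); i >= 0 {
		sec, frac = s[:i], s[i+1:]
	}
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if frac != "" {
		frac = (frac + "000000000")[:9] // pad/truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(secs, nsec), nil
}

func main() {
	guest, err := parseEpoch("1714075370.921171901") // guest-side `date +%s.%N` from the log
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 4, 25, 20, 2, 50, 836976462, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // assumed threshold; the real bound is minikube's, not this value
	fmt.Printf("guest clock delta is %v, within tolerance: %v\n",
		delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}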
	I0425 20:02:50.951321   72220 start.go:83] releasing machines lock for "no-preload-744552", held for 20.352126868s
	I0425 20:02:50.951348   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.951612   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.954231   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954614   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.954638   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954821   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955240   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955419   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955492   72220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:02:50.955540   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.955659   72220 ssh_runner.go:195] Run: cat /version.json
	I0425 20:02:50.955688   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.958155   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958476   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958517   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958541   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958661   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.958808   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.958903   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958932   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.958935   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.959045   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.959181   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.959192   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.959360   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.959471   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:51.066809   72220 ssh_runner.go:195] Run: systemctl --version
	I0425 20:02:51.073198   72220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:02:51.228547   72220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:02:51.236443   72220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:02:51.236518   72220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:02:51.256226   72220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:02:51.256244   72220 start.go:494] detecting cgroup driver to use...
	I0425 20:02:51.256307   72220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:02:51.278596   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:02:51.295692   72220 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:02:51.295751   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:02:51.310940   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:02:51.326072   72220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:02:51.459064   72220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:02:51.614563   72220 docker.go:233] disabling docker service ...
	I0425 20:02:51.614639   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:02:51.638817   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:02:51.658265   72220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:02:51.818412   72220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:02:51.943830   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:02:51.960672   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:02:51.982028   72220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:02:51.982090   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:51.994990   72220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:02:51.995079   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.007907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.020225   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.033306   72220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:02:52.046241   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.058282   72220 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.078907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.090258   72220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:02:52.100796   72220 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:02:52.100873   72220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:02:52.115600   72220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:02:52.125458   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:02:52.288142   72220 ssh_runner.go:195] Run: sudo systemctl restart crio
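The steps from writing /etc/crictl.yaml through the sed -i edits (pause image, cgroupfs cgroup manager, conmon_cgroup) end with a daemon-reload and a cri-o restart. A dry-run sketch of that sequence; the runner type is a stand-in for "run this command on the VM over SSH", and only a subset of the logged edits is shown:

package main

import "fmt"

// runner abstracts running a shell command on the guest; here it just prints,
// so this is a dry run of the cri-o configuration step above.
type runner func(cmd string) error

func configureCRIO(run runner, pauseImage, cgroupDriver string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := []string{
		// point crictl at the cri-o socket
		`sudo mkdir -p /etc && printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml`,
		// same shape as the logged `sed -i` calls, parameterised
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	dryRun := runner(func(cmd string) error { fmt.Println(cmd); return nil })
	if err := configureCRIO(dryRun, "registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}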
	I0425 20:02:52.430252   72220 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:02:52.430353   72220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:02:52.436493   72220 start.go:562] Will wait 60s for crictl version
	I0425 20:02:52.436565   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.441427   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:02:52.479709   72220 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:02:52.479810   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.512180   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.545115   72220 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:02:52.546476   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:52.549314   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549723   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:52.549759   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549926   72220 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0425 20:02:52.554924   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:02:52.568804   72220 kubeadm.go:877] updating cluster {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:02:52.568958   72220 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:02:52.568997   72220 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:02:52.609095   72220 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:02:52.609117   72220 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:02:52.609156   72220 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.609188   72220 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.609185   72220 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.609214   72220 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.609227   72220 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.609256   72220 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.609334   72220 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.609370   72220 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610726   72220 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.610747   72220 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610772   72220 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.610724   72220 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.610800   72220 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.610807   72220 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.611075   72220 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.611096   72220 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.753069   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0425 20:02:52.771762   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.825052   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908030   72220 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0425 20:02:52.908082   72220 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.908113   72220 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0425 20:02:52.908127   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.908135   72220 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908164   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.915126   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.915132   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.967834   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.969385   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.973718   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0425 20:02:52.973787   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0425 20:02:52.973823   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:52.973870   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:52.985763   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.986695   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.068153   72220 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0425 20:02:53.068196   72220 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.068269   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099237   72220 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0425 20:02:53.099257   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0425 20:02:53.099274   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099290   72220 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:53.099294   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0425 20:02:53.099330   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099368   72220 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0425 20:02:53.099401   72220 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:53.099433   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099333   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.115478   72220 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0425 20:02:53.115523   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.115526   72220 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.115610   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.550328   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.240552   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting to get IP...
	I0425 20:02:52.241327   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241657   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241757   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.241648   73527 retry.go:31] will retry after 195.006273ms: waiting for machine to come up
	I0425 20:02:52.438154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438702   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438726   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.438657   73527 retry.go:31] will retry after 365.911905ms: waiting for machine to come up
	I0425 20:02:52.806281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806793   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806826   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.806727   73527 retry.go:31] will retry after 448.572137ms: waiting for machine to come up
	I0425 20:02:53.257396   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257935   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257966   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.257889   73527 retry.go:31] will retry after 560.886917ms: waiting for machine to come up
	I0425 20:02:53.820527   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820954   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820979   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.820915   73527 retry.go:31] will retry after 514.294303ms: waiting for machine to come up
	I0425 20:02:54.336706   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337129   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:54.337101   73527 retry.go:31] will retry after 853.040726ms: waiting for machine to come up
	I0425 20:02:55.192349   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192857   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:55.192774   73527 retry.go:31] will retry after 1.17554782s: waiting for machine to come up
	I0425 20:02:56.232794   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.133436829s)
	I0425 20:02:56.232845   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0425 20:02:56.232854   72220 ssh_runner.go:235] Completed: which crictl: (3.133373607s)
	I0425 20:02:56.232875   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232915   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232961   72220 ssh_runner.go:235] Completed: which crictl: (3.133515676s)
	I0425 20:02:56.232919   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:56.233011   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:56.233050   72220 ssh_runner.go:235] Completed: which crictl: (3.11742497s)
	I0425 20:02:56.233089   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:56.233126   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (3.117580594s)
	I0425 20:02:56.233160   72220 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.6828061s)
	I0425 20:02:56.233167   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0425 20:02:56.233207   72220 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0425 20:02:56.233242   72220 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:56.233248   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:56.233284   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:56.323764   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0425 20:02:56.323884   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:02:56.323906   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0425 20:02:56.323989   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:02:58.553707   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.320762887s)
	I0425 20:02:58.553742   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0425 20:02:58.553768   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.320739179s)
	I0425 20:02:58.553784   72220 ssh_runner.go:235] Completed: which crictl: (2.320487912s)
	I0425 20:02:58.553807   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0425 20:02:58.553838   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:58.553864   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.320587538s)
	I0425 20:02:58.553889   72220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:02:58.553909   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0425 20:02:58.553948   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.229944417s)
	I0425 20:02:58.553959   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553989   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0425 20:02:58.554009   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553910   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.23000183s)
	I0425 20:02:58.554069   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0425 20:02:58.602692   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0425 20:02:58.602694   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0425 20:02:58.602819   72304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:02:56.369693   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370169   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:56.370115   73527 retry.go:31] will retry after 1.260629487s: waiting for machine to come up
	I0425 20:02:57.632705   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633187   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:57.633150   73527 retry.go:31] will retry after 1.291948113s: waiting for machine to come up
	I0425 20:02:58.926675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927167   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927196   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:58.927111   73527 retry.go:31] will retry after 1.869565597s: waiting for machine to come up
	I0425 20:03:00.799357   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799820   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799850   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:00.799750   73527 retry.go:31] will retry after 2.157801293s: waiting for machine to come up
	I0425 20:03:00.027830   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.473790165s)
	I0425 20:03:00.027869   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0425 20:03:00.027895   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027943   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027842   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.424998268s)
	I0425 20:03:00.027985   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0425 20:03:02.204218   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.176247608s)
	I0425 20:03:02.204254   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0425 20:03:02.204290   72220 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.204335   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.959407   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959789   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959812   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:02.959745   73527 retry.go:31] will retry after 2.617480271s: waiting for machine to come up
	I0425 20:03:05.579300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:05.579775   73527 retry.go:31] will retry after 4.058370199s: waiting for machine to come up
	I0425 20:03:06.132743   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.928385447s)
	I0425 20:03:06.132779   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0425 20:03:06.132805   72220 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:06.132857   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:08.314803   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.181910584s)
	I0425 20:03:08.314842   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0425 20:03:08.314881   72220 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:03:08.314930   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:03:11.255486   72712 start.go:364] duration metric: took 3m53.796595105s to acquireMachinesLock for "old-k8s-version-210442"
	I0425 20:03:11.255550   72712 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:11.255569   72712 fix.go:54] fixHost starting: 
	I0425 20:03:11.256083   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:11.256128   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:11.272950   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0425 20:03:11.273365   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:11.273878   72712 main.go:141] libmachine: Using API Version  1
	I0425 20:03:11.273907   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:11.274277   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:11.274487   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:11.274666   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetState
	I0425 20:03:11.276420   72712 fix.go:112] recreateIfNeeded on old-k8s-version-210442: state=Stopped err=<nil>
	I0425 20:03:11.276454   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	W0425 20:03:11.276608   72712 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:11.279156   72712 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210442" ...
	I0425 20:03:09.639300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639833   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Found IP for machine: 192.168.39.123
	I0425 20:03:09.639867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has current primary IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserving static IP address...
	I0425 20:03:09.640257   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.640281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | skip adding static IP to network mk-default-k8s-diff-port-142196 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"}
	I0425 20:03:09.640300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserved static IP address: 192.168.39.123
	I0425 20:03:09.640313   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for SSH to be available...
	I0425 20:03:09.640321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Getting to WaitForSSH function...
	I0425 20:03:09.643058   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643371   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.643400   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643506   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH client type: external
	I0425 20:03:09.643557   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa (-rw-------)
	I0425 20:03:09.643586   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:09.643609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | About to run SSH command:
	I0425 20:03:09.643618   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | exit 0
	I0425 20:03:09.766707   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:09.767091   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetConfigRaw
	I0425 20:03:09.767818   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:09.770573   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771012   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.771047   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771296   72304 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/config.json ...
	I0425 20:03:09.771580   72304 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:09.771609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:09.771884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.774255   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.774699   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774866   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.775044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775213   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775362   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.775520   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.775781   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.775797   72304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:09.884259   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:09.884288   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884519   72304 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142196"
	I0425 20:03:09.884547   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884747   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.887391   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.887798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.887829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.888003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.888215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888542   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.888703   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.888918   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.888934   72304 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142196 && echo "default-k8s-diff-port-142196" | sudo tee /etc/hostname
	I0425 20:03:10.015919   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142196
	
	I0425 20:03:10.015951   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.018640   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.018955   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.018987   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.019201   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.019398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019729   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.019906   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.020098   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.020120   72304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142196' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142196/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142196' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:10.145789   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:10.145822   72304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:10.145873   72304 buildroot.go:174] setting up certificates
	I0425 20:03:10.145886   72304 provision.go:84] configureAuth start
	I0425 20:03:10.145899   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:10.146185   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:10.148943   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149309   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.149342   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149492   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.152000   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152418   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.152445   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152621   72304 provision.go:143] copyHostCerts
	I0425 20:03:10.152681   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:10.152693   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:10.152758   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:10.152890   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:10.152905   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:10.152940   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:10.153033   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:10.153044   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:10.153072   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:10.153145   72304 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142196 san=[127.0.0.1 192.168.39.123 default-k8s-diff-port-142196 localhost minikube]
	I0425 20:03:10.572412   72304 provision.go:177] copyRemoteCerts
	I0425 20:03:10.572473   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:10.572496   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.575083   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.575421   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.575696   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.575799   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.575916   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:10.657850   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:10.685493   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0425 20:03:10.713230   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:10.740577   72304 provision.go:87] duration metric: took 594.674196ms to configureAuth
	I0425 20:03:10.740604   72304 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:10.740835   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:10.740916   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.743709   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744039   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.744071   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744236   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.744434   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744621   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744723   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.744901   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.745065   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.745083   72304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:11.017816   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:11.017844   72304 machine.go:97] duration metric: took 1.24624593s to provisionDockerMachine
	I0425 20:03:11.017858   72304 start.go:293] postStartSetup for "default-k8s-diff-port-142196" (driver="kvm2")
	I0425 20:03:11.017871   72304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:11.017892   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.018195   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:11.018231   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.020759   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021067   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.021092   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.021403   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.021600   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.021729   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.106290   72304 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:11.111532   72304 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:11.111560   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:11.111645   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:11.111744   72304 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:11.111856   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:11.122216   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:11.150472   72304 start.go:296] duration metric: took 132.600197ms for postStartSetup
	I0425 20:03:11.150520   72304 fix.go:56] duration metric: took 20.199020729s for fixHost
	I0425 20:03:11.150544   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.153466   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.153798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.153824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.154055   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.154289   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154483   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154635   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.154824   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:11.154991   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:11.155001   72304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 20:03:11.255330   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075391.221756501
	
	I0425 20:03:11.255357   72304 fix.go:216] guest clock: 1714075391.221756501
	I0425 20:03:11.255365   72304 fix.go:229] Guest: 2024-04-25 20:03:11.221756501 +0000 UTC Remote: 2024-04-25 20:03:11.15052524 +0000 UTC m=+294.908822896 (delta=71.231261ms)
	I0425 20:03:11.255384   72304 fix.go:200] guest clock delta is within tolerance: 71.231261ms
	I0425 20:03:11.255388   72304 start.go:83] releasing machines lock for "default-k8s-diff-port-142196", held for 20.303917474s
	I0425 20:03:11.255419   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.255700   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:11.258740   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259076   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.259104   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259414   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.259906   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260102   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260197   72304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:11.260241   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.260350   72304 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:11.260374   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.262843   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263001   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263216   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263245   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263365   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263480   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263669   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263679   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263864   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264026   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264039   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.264203   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.280701   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .Start
	I0425 20:03:11.280895   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring networks are active...
	I0425 20:03:11.281729   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network default is active
	I0425 20:03:11.282158   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network mk-old-k8s-version-210442 is active
	I0425 20:03:11.282639   72712 main.go:141] libmachine: (old-k8s-version-210442) Getting domain xml...
	I0425 20:03:11.283399   72712 main.go:141] libmachine: (old-k8s-version-210442) Creating domain...
	I0425 20:03:11.339564   72304 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:11.364667   72304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:11.526308   72304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:11.533487   72304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:11.533563   72304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:11.552090   72304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:11.552120   72304 start.go:494] detecting cgroup driver to use...
	I0425 20:03:11.552196   72304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:11.569573   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:11.584425   72304 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:11.584489   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:11.599083   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:11.613739   72304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:11.739574   72304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:11.911318   72304 docker.go:233] disabling docker service ...
	I0425 20:03:11.911390   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:11.928743   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:11.946101   72304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:12.112740   72304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:12.246863   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:12.269551   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:12.298838   72304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:12.298907   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.312059   72304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:12.312113   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.324076   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.336239   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.350088   72304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:12.368362   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.385406   72304 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.407195   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.420065   72304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:12.431195   72304 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:12.431260   72304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:12.446263   72304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:12.457137   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:12.622756   72304 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:12.799932   72304 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:12.800012   72304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:12.807795   72304 start.go:562] Will wait 60s for crictl version
	I0425 20:03:12.807862   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:03:12.813860   72304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:12.861249   72304 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:12.861327   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.896140   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.942768   72304 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:03:09.079550   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0425 20:03:09.079607   72220 cache_images.go:123] Successfully loaded all cached images
	I0425 20:03:09.079615   72220 cache_images.go:92] duration metric: took 16.470485982s to LoadCachedImages
	I0425 20:03:09.079629   72220 kubeadm.go:928] updating node { 192.168.72.142 8443 v1.30.0 crio true true} ...
	I0425 20:03:09.079764   72220 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-744552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:09.079839   72220 ssh_runner.go:195] Run: crio config
	I0425 20:03:09.139170   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:09.139194   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:09.139206   72220 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:09.139225   72220 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.142 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-744552 NodeName:no-preload-744552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:09.139365   72220 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-744552"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:09.139426   72220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:09.151828   72220 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:09.151884   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:09.163310   72220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0425 20:03:09.183132   72220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:09.203038   72220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0425 20:03:09.223717   72220 ssh_runner.go:195] Run: grep 192.168.72.142	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:09.228467   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:09.243976   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:09.361475   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:09.380862   72220 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552 for IP: 192.168.72.142
	I0425 20:03:09.380886   72220 certs.go:194] generating shared ca certs ...
	I0425 20:03:09.380901   72220 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:09.381076   72220 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:09.381132   72220 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:09.381147   72220 certs.go:256] generating profile certs ...
	I0425 20:03:09.381254   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/client.key
	I0425 20:03:09.381337   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key.a705cb96
	I0425 20:03:09.381392   72220 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key
	I0425 20:03:09.381538   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:09.381586   72220 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:09.381601   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:09.381638   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:09.381668   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:09.381702   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:09.381761   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:09.382459   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:09.423895   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:09.462481   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:09.491394   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:09.532779   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 20:03:09.569107   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 20:03:09.597381   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:09.623962   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:09.651141   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:09.677295   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:09.702404   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:09.729275   72220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:09.748421   72220 ssh_runner.go:195] Run: openssl version
	I0425 20:03:09.754848   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:09.768121   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774468   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774529   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.783568   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:09.799120   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:09.812983   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818660   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818740   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.826091   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:09.840115   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:09.853372   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858387   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858455   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.864693   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:09.876755   72220 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:09.882829   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:09.890219   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:09.897091   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:09.906017   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:09.913154   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:09.919989   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
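The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above check whether each control-plane certificate will still be valid 24 hours from now; a certificate that fails this check would be regenerated before the cluster is restarted. Below is only an illustrative Go sketch of the same validity check, not minikube's actual implementation; the certificate path is an example taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// report failure if the certificate expires within the next 24 hours.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // example path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}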
	I0425 20:03:09.926552   72220 kubeadm.go:391] StartCluster: {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:09.926671   72220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:09.926734   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:09.971983   72220 cri.go:89] found id: ""
	I0425 20:03:09.972071   72220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:09.983371   72220 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:09.983399   72220 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:09.983406   72220 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:09.983451   72220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:09.994047   72220 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:09.995080   72220 kubeconfig.go:125] found "no-preload-744552" server: "https://192.168.72.142:8443"
	I0425 20:03:09.997202   72220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:10.007666   72220 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.142
	I0425 20:03:10.007703   72220 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:10.007713   72220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:10.007752   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:10.049581   72220 cri.go:89] found id: ""
	I0425 20:03:10.049679   72220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:10.071032   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:10.083240   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:10.083267   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:10.083314   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:10.093444   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:10.093507   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:10.104291   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:10.114596   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:10.114659   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:10.125118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.138299   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:10.138362   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.152185   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:10.163493   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:10.163555   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:10.177214   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:10.188286   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:10.312536   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.497483   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.184911769s)
	I0425 20:03:11.497531   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.753732   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.871246   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.968366   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:11.968445   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.468885   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.968598   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:13.037502   72220 api_server.go:72] duration metric: took 1.069135698s to wait for apiserver process to appear ...
	I0425 20:03:13.037542   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:13.037568   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:13.038540   72220 api_server.go:269] stopped: https://192.168.72.142:8443/healthz: Get "https://192.168.72.142:8443/healthz": dial tcp 192.168.72.142:8443: connect: connection refused
	I0425 20:03:13.537713   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:12.944206   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:12.947412   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.947822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:12.947852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.948086   72304 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:12.953504   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:12.969171   72304 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:12.969344   72304 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:12.969402   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:13.016509   72304 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:13.016585   72304 ssh_runner.go:195] Run: which lz4
	I0425 20:03:13.022023   72304 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 20:03:13.027861   72304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:13.027896   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:14.913405   72304 crio.go:462] duration metric: took 1.891428846s to copy over tarball
	I0425 20:03:14.913466   72304 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:03:12.659136   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting to get IP...
	I0425 20:03:12.660227   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.660770   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.660843   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.660724   73691 retry.go:31] will retry after 234.96602ms: waiting for machine to come up
	I0425 20:03:12.897395   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.897966   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.897993   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.897913   73691 retry.go:31] will retry after 387.692223ms: waiting for machine to come up
	I0425 20:03:13.287742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.288414   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.288443   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.288397   73691 retry.go:31] will retry after 461.897892ms: waiting for machine to come up
	I0425 20:03:13.752061   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.752574   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.752603   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.752513   73691 retry.go:31] will retry after 452.347315ms: waiting for machine to come up
	I0425 20:03:14.206275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.206684   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.206708   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.206629   73691 retry.go:31] will retry after 466.12355ms: waiting for machine to come up
	I0425 20:03:14.674265   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.674788   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.674818   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.674735   73691 retry.go:31] will retry after 697.70071ms: waiting for machine to come up
	I0425 20:03:15.373862   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:15.374297   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:15.374325   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:15.374252   73691 retry.go:31] will retry after 835.73273ms: waiting for machine to come up
	I0425 20:03:16.211394   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:16.211870   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:16.211902   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:16.211815   73691 retry.go:31] will retry after 1.26739043s: waiting for machine to come up
	I0425 20:03:16.441793   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.441829   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.441848   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.506023   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.506057   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.538293   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.544891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:16.544925   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.038519   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.049842   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.049883   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.538420   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.545891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.545929   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:18.038192   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:18.042957   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:03:18.063131   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:18.063171   72220 api_server.go:131] duration metric: took 5.025619242s to wait for apiserver health ...
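The 403 and 500 responses above are expected while the restarted apiserver is still finishing its post-start hooks; the log shows minikube simply polling the /healthz endpoint until it answers 200 or a timeout expires. Below is a minimal, illustrative Go sketch of such a polling loop, not minikube's actual code; the endpoint URL is taken from the log and the five-second request timeout, two-minute deadline and retry interval are assumed values.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a self-signed certificate, so this probe skips verification.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.142:8443/healthz" // example endpoint from the log above
	deadline := time.Now().Add(2 * time.Minute)  // example overall timeout

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}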
	I0425 20:03:18.063182   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:18.063192   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:18.405047   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:03:18.552639   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:18.565507   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:03:18.591534   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:17.662135   72304 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.748640149s)
	I0425 20:03:17.662171   72304 crio.go:469] duration metric: took 2.748741671s to extract the tarball
	I0425 20:03:17.662184   72304 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:17.706288   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:17.773537   72304 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:03:17.773565   72304 cache_images.go:84] Images are preloaded, skipping loading
	I0425 20:03:17.773575   72304 kubeadm.go:928] updating node { 192.168.39.123 8444 v1.30.0 crio true true} ...
	I0425 20:03:17.773709   72304 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142196 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:17.773799   72304 ssh_runner.go:195] Run: crio config
	I0425 20:03:17.836354   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:17.836379   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:17.836391   72304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:17.836411   72304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142196 NodeName:default-k8s-diff-port-142196 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:17.836545   72304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142196"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:17.836599   72304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:17.848441   72304 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:17.848506   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:17.860320   72304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0425 20:03:17.885528   72304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:17.905701   72304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0425 20:03:17.925064   72304 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:17.930085   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:17.944507   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:18.108208   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:18.134428   72304 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196 for IP: 192.168.39.123
	I0425 20:03:18.134456   72304 certs.go:194] generating shared ca certs ...
	I0425 20:03:18.134479   72304 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:18.134672   72304 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:18.134745   72304 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:18.134761   72304 certs.go:256] generating profile certs ...
	I0425 20:03:18.134870   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/client.key
	I0425 20:03:18.245553   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key.1fb61bcb
	I0425 20:03:18.245666   72304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key
	I0425 20:03:18.245833   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:18.245880   72304 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:18.245894   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:18.245934   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:18.245964   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:18.245997   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:18.246058   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:18.246994   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:18.293000   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:18.322296   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:18.358060   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:18.390999   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0425 20:03:18.420333   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:18.450050   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:18.477983   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:18.506030   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:18.538394   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:18.574361   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:18.610827   72304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:18.634141   72304 ssh_runner.go:195] Run: openssl version
	I0425 20:03:18.640647   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:18.653988   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659400   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659458   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.665868   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:18.679247   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:18.692272   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697356   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697410   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.703694   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:18.716412   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:18.733362   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739598   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739651   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.748175   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:18.764492   72304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:18.770594   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:18.777414   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:18.784614   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:18.793453   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:18.800721   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:18.807982   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 20:03:18.814836   72304 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:18.814942   72304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:18.814992   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.864771   72304 cri.go:89] found id: ""
	I0425 20:03:18.864834   72304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:18.878200   72304 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:18.878238   72304 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:18.878245   72304 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:18.878305   72304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:18.892071   72304 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:18.892973   72304 kubeconfig.go:125] found "default-k8s-diff-port-142196" server: "https://192.168.39.123:8444"
	I0425 20:03:18.894860   72304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:18.907959   72304 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.123
	I0425 20:03:18.907989   72304 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:18.907998   72304 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:18.908045   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.951245   72304 cri.go:89] found id: ""
	I0425 20:03:18.951311   72304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:18.980033   72304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:18.995453   72304 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:18.995473   72304 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:18.995524   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0425 20:03:19.007409   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:19.007470   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:19.019782   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0425 20:03:19.031410   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:19.031493   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:19.043439   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.055936   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:19.055999   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.067986   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0425 20:03:19.080785   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:19.080869   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:19.092802   72304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
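The four grep/rm pairs above implement a simple rule: a kubeconfig is kept only if it already references the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A minimal Go sketch of that pattern, assuming the same file list as the log (illustrative only, not minikube's kubeadm.go; the -q flag is added here for brevity):

    // cleanupStaleConfigs removes kubeconfigs that do not reference the expected
    // control-plane endpoint so that "kubeadm init phase kubeconfig" regenerates them.
    func cleanupStaleConfigs(endpoint string) error {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent or the file is missing,
            // which is exactly the case where the file should be removed.
            if err := exec.Command("sudo", "grep", "-q", endpoint, f).Run(); err != nil {
                if rmErr := exec.Command("sudo", "rm", "-f", f).Run(); rmErr != nil {
                    return fmt.Errorf("removing %s: %w", f, rmErr)
                }
            }
        }
        return nil
    }

(Uses only os/exec and fmt; in the run above every grep fails because the files do not exist yet, so all four rm commands are no-ops.)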
	I0425 20:03:19.105024   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.240077   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.259510   72304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.019382485s)
	I0425 20:03:20.259544   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.489833   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.599319   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.784451   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:20.784606   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.284759   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:17.480654   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:17.481045   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:17.481094   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:17.481007   73691 retry.go:31] will retry after 1.238487953s: waiting for machine to come up
	I0425 20:03:18.720512   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:18.720940   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:18.720965   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:18.720902   73691 retry.go:31] will retry after 2.277078909s: waiting for machine to come up
	I0425 20:03:20.999749   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:21.000275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:21.000305   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:21.000223   73691 retry.go:31] will retry after 2.81059851s: waiting for machine to come up
	I0425 20:03:18.940880   72220 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:18.983894   72220 system_pods.go:61] "coredns-7db6d8ff4d-67sp6" [0fc3ee18-e3fe-4f4a-a5bd-4d6e3497bfa3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:18.983953   72220 system_pods.go:61] "etcd-no-preload-744552" [f3768d08-4cc6-42aa-9d1c-b0fd5d6ffed5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:18.983975   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [9d927e1f-4ddb-4b54-b1f1-f5248cb51745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:18.983984   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [cc71ce6c-22ba-4189-99dc-dd2da6506d37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:18.983993   72220 system_pods.go:61] "kube-proxy-whkbk" [a22b51a9-4854-41f5-bb5a-a81920a09b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0425 20:03:18.984026   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [5f01cd76-d6b7-4033-9aa9-38cac91965d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:18.984037   72220 system_pods.go:61] "metrics-server-569cc877fc-6n2gd" [03283a78-d44f-4f60-9743-680c18aeace3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:18.984052   72220 system_pods.go:61] "storage-provisioner" [4211811e-85ce-4da2-bc16-16909c26ced7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0425 20:03:18.984064   72220 system_pods.go:74] duration metric: took 392.509163ms to wait for pod list to return data ...
	I0425 20:03:18.984077   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:18.989373   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:18.989405   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:18.989424   72220 node_conditions.go:105] duration metric: took 5.341625ms to run NodePressure ...
	I0425 20:03:18.989446   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.809313   72220 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818730   72220 kubeadm.go:733] kubelet initialised
	I0425 20:03:19.818753   72220 kubeadm.go:734] duration metric: took 9.41696ms waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818761   72220 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:19.825762   72220 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:21.834658   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:21.785434   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.855046   72304 api_server.go:72] duration metric: took 1.070594042s to wait for apiserver process to appear ...
	I0425 20:03:21.855127   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:21.855156   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:21.855709   72304 api_server.go:269] stopped: https://192.168.39.123:8444/healthz: Get "https://192.168.39.123:8444/healthz": dial tcp 192.168.39.123:8444: connect: connection refused
	I0425 20:03:22.355555   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.430068   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.430099   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.430115   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.487089   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.487124   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.855301   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.861270   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:24.861299   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:25.356007   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.360802   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.360839   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:25.855336   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.861719   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.861753   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:23.812963   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:23.813457   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:23.813476   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:23.813429   73691 retry.go:31] will retry after 2.508562986s: waiting for machine to come up
	I0425 20:03:26.323267   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:26.323733   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:26.323761   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:26.323699   73691 retry.go:31] will retry after 4.475703543s: waiting for machine to come up
	I0425 20:03:26.355254   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.360977   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.361011   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:26.855547   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.860178   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.860203   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.355819   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.360466   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:27.360491   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.856219   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.861706   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:03:27.868486   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:27.868525   72304 api_server.go:131] duration metric: took 6.013385579s to wait for apiserver health ...
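The polling above tolerates 403 responses (anonymous user before the RBAC bootstrap roles exist) and 500 responses (post-start hooks still failing) until /healthz finally returns 200. A hedged Go sketch of such a wait loop, assuming a self-signed apiserver certificate and a roughly 500 ms retry cadence (illustrative, not minikube's api_server.go):

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or the
    // deadline expires, ignoring transient 403/500 responses during bootstrap.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // The restarted apiserver serves a self-signed certificate, so verification is skipped.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "healthz returned 200: ok"
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

(Imports assumed: crypto/tls, fmt, net/http, time.)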
	I0425 20:03:27.868536   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:27.868544   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:27.870534   72304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:03:24.335382   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:24.335415   72220 pod_ready.go:81] duration metric: took 4.509621487s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:24.335427   72220 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:26.342530   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:28.841444   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:27.871863   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:27.885767   72304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
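The 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. The sketch below writes an illustrative bridge CNI configuration with typical fields; the subnet and plugin options are assumptions, not the actual payload:

    // bridgeConflist is an illustrative bridge CNI configuration (field values assumed).
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    // writeConflist installs the configuration where the kubelet/CRI-O expect to find it.
    func writeConflist() error {
        return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }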
	I0425 20:03:27.910270   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:27.922984   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:27.923016   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:27.923024   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:27.923030   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:27.923036   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:27.923041   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:03:27.923052   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:27.923057   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:27.923061   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:03:27.923067   72304 system_pods.go:74] duration metric: took 12.774358ms to wait for pod list to return data ...
	I0425 20:03:27.923073   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:27.927553   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:27.927582   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:27.927596   72304 node_conditions.go:105] duration metric: took 4.517775ms to run NodePressure ...
	I0425 20:03:27.927616   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:28.213013   72304 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217836   72304 kubeadm.go:733] kubelet initialised
	I0425 20:03:28.217860   72304 kubeadm.go:734] duration metric: took 4.809ms waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217869   72304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:28.225122   72304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.229920   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229940   72304 pod_ready.go:81] duration metric: took 4.794976ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.229948   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229954   72304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.234362   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234380   72304 pod_ready.go:81] duration metric: took 4.417955ms for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.234388   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234394   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.238885   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238904   72304 pod_ready.go:81] duration metric: took 4.504378ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.238917   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238924   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.314420   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314446   72304 pod_ready.go:81] duration metric: took 75.511589ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.314457   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314464   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.714128   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714165   72304 pod_ready.go:81] duration metric: took 399.694231ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.714178   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714187   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.113925   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113958   72304 pod_ready.go:81] duration metric: took 399.760651ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.113971   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113977   72304 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.514107   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514132   72304 pod_ready.go:81] duration metric: took 400.147308ms for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.514142   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514149   72304 pod_ready.go:38] duration metric: took 1.296270699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
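The extra wait above checks each system-critical pod for the PodReady condition and skips pods whose node has not yet reported Ready, exactly as the "skipping!" messages show. A hedged client-go sketch of that per-pod check (illustrative, not minikube's pod_ready.go):

    // waitPodReady polls a kube-system pod until its PodReady condition is True
    // or the timeout elapses.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return true
                    }
                }
            }
            time.Sleep(400 * time.Millisecond)
        }
        return false
    }

(Imports assumed: context, time, k8s.io/api/core/v1 as corev1, k8s.io/apimachinery/pkg/apis/meta/v1 as metav1, k8s.io/client-go/kubernetes. In the run above every wait returns early because the node itself is not Ready yet.)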
	I0425 20:03:29.514167   72304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:03:29.528766   72304 ops.go:34] apiserver oom_adj: -16
	I0425 20:03:29.528791   72304 kubeadm.go:591] duration metric: took 10.650540723s to restartPrimaryControlPlane
	I0425 20:03:29.528801   72304 kubeadm.go:393] duration metric: took 10.713975851s to StartCluster
	I0425 20:03:29.528816   72304 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.528887   72304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:29.530674   72304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.530951   72304 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:03:29.532792   72304 out.go:177] * Verifying Kubernetes components...
	I0425 20:03:29.531039   72304 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:03:29.531203   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:29.534328   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:29.534349   72304 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534377   72304 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534383   72304 addons.go:243] addon metrics-server should already be in state true
	I0425 20:03:29.534331   72304 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534416   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534441   72304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142196"
	I0425 20:03:29.534334   72304 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534536   72304 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534549   72304 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:03:29.534584   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534786   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534814   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534839   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534815   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534956   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.535000   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.551165   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0425 20:03:29.551680   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552007   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0425 20:03:29.552399   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.552419   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.552445   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552864   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553003   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.553028   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.553066   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.553409   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553621   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0425 20:03:29.554006   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.554024   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.554057   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.554555   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.554579   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.554908   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.555432   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.555487   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.557216   72304 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.557238   72304 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:03:29.557267   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.557642   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.557675   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.570559   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0425 20:03:29.571013   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.571538   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.571562   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.571944   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.572152   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.574003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.576061   72304 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:03:29.575108   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33777
	I0425 20:03:29.575580   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0425 20:03:29.577356   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:03:29.577374   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:03:29.577394   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.577861   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.577964   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.578333   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578356   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578514   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578543   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578735   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578909   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578947   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.579603   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.579633   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.580871   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.582436   72304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:29.581297   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.581851   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.583941   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.583971   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.583994   72304 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.584021   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:03:29.584031   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.584044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.584282   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.584430   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.586538   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.586880   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.586901   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.587119   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.587314   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.587470   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.587560   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.595882   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0425 20:03:29.596234   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.596711   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.596728   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.597146   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.597321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.598599   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.598799   72304 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:29.598811   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:03:29.598822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.600829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.601149   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.601409   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.601479   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.601537   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.772228   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:29.799159   72304 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:29.893622   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:03:29.893647   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:03:29.895090   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.919651   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:03:29.919673   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:03:29.929992   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:30.004488   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:30.004519   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:03:30.061525   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.113425632s)
	I0425 20:03:31.043511   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.148338843s)
	I0425 20:03:31.043539   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043587   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043524   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043629   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043894   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043910   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043946   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.043953   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043964   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043973   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043992   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044107   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044159   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044199   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044209   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044219   72304 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-142196"
	I0425 20:03:31.044216   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044237   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044253   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044262   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044542   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044566   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044662   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044682   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.052429   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.052451   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.052675   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.052694   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.055680   72304 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0425 20:03:31.057271   72304 addons.go:505] duration metric: took 1.526243989s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0425 20:03:32.187768   71966 start.go:364] duration metric: took 56.585448027s to acquireMachinesLock for "embed-certs-512173"
	I0425 20:03:32.187838   71966 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:32.187849   71966 fix.go:54] fixHost starting: 
	I0425 20:03:32.188220   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:32.188266   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:32.207172   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0425 20:03:32.207627   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:32.208170   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:03:32.208196   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:32.208493   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:32.208700   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:32.208837   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:03:32.210552   71966 fix.go:112] recreateIfNeeded on embed-certs-512173: state=Stopped err=<nil>
	I0425 20:03:32.210577   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	W0425 20:03:32.210741   71966 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:32.213400   71966 out.go:177] * Restarting existing kvm2 VM for "embed-certs-512173" ...
	I0425 20:03:30.803467   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804014   72712 main.go:141] libmachine: (old-k8s-version-210442) Found IP for machine: 192.168.61.136
	I0425 20:03:30.804041   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserving static IP address...
	I0425 20:03:30.804057   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has current primary IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804495   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.804535   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | skip adding static IP to network mk-old-k8s-version-210442 - found existing host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"}
	I0425 20:03:30.804562   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserved static IP address: 192.168.61.136
	I0425 20:03:30.804582   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting for SSH to be available...
	I0425 20:03:30.804599   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Getting to WaitForSSH function...
	I0425 20:03:30.807110   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807533   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.807556   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807706   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH client type: external
	I0425 20:03:30.807725   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa (-rw-------)
	I0425 20:03:30.807767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:30.807783   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | About to run SSH command:
	I0425 20:03:30.807815   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | exit 0
	I0425 20:03:30.935091   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:30.935445   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetConfigRaw
	I0425 20:03:30.936168   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:30.938767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939193   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.939246   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939428   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 20:03:30.939630   72712 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:30.939649   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:30.939870   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:30.942320   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.942771   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942923   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:30.943113   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943306   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943468   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:30.943640   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:30.943842   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:30.943854   72712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:31.052598   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:31.052625   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.052821   72712 buildroot.go:166] provisioning hostname "old-k8s-version-210442"
	I0425 20:03:31.052844   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.053080   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.056324   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056713   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.056745   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056885   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.057056   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057190   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057375   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.057549   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.057724   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.057742   72712 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210442 && echo "old-k8s-version-210442" | sudo tee /etc/hostname
	I0425 20:03:31.188461   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210442
	
	I0425 20:03:31.188494   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.191628   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192088   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.192117   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192332   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.192519   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192655   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192767   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.192944   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.193142   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.193167   72712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:31.317374   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:31.317402   72712 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:31.317436   72712 buildroot.go:174] setting up certificates
	I0425 20:03:31.317447   72712 provision.go:84] configureAuth start
	I0425 20:03:31.317461   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.317778   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:31.321012   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321388   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.321421   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321698   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.323976   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324326   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.324354   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324523   72712 provision.go:143] copyHostCerts
	I0425 20:03:31.324573   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:31.324584   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:31.324656   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:31.324764   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:31.324778   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:31.324807   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:31.324879   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:31.324890   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:31.324915   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:31.324978   72712 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210442 san=[127.0.0.1 192.168.61.136 localhost minikube old-k8s-version-210442]
	I0425 20:03:31.410674   72712 provision.go:177] copyRemoteCerts
	I0425 20:03:31.410728   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:31.410755   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.413170   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413449   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.413491   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413634   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.413832   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.413988   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.414156   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:31.502759   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:31.536662   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0425 20:03:31.565106   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:31.593254   72712 provision.go:87] duration metric: took 275.793443ms to configureAuth
	I0425 20:03:31.593287   72712 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:31.593621   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 20:03:31.593720   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.596515   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.596827   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.596859   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.597057   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.597287   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597448   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597624   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.597775   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.597927   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.597942   72712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:31.925149   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:31.925182   72712 machine.go:97] duration metric: took 985.540626ms to provisionDockerMachine
	I0425 20:03:31.925199   72712 start.go:293] postStartSetup for "old-k8s-version-210442" (driver="kvm2")
	I0425 20:03:31.925211   72712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:31.925258   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:31.925560   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:31.925596   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.928532   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.928982   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.929013   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.929232   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.929458   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.929637   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.929787   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.023009   72712 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:32.029391   72712 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:32.029426   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:32.029508   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:32.029576   72712 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:32.029664   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:32.046596   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:32.077323   72712 start.go:296] duration metric: took 152.112632ms for postStartSetup
	I0425 20:03:32.077396   72712 fix.go:56] duration metric: took 20.821829703s for fixHost
	I0425 20:03:32.077425   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.080136   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080477   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.080526   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080636   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.080836   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081067   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081283   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.081493   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:32.081695   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:32.081711   72712 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:03:32.187617   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075412.163072845
	
	I0425 20:03:32.187642   72712 fix.go:216] guest clock: 1714075412.163072845
	I0425 20:03:32.187652   72712 fix.go:229] Guest: 2024-04-25 20:03:32.163072845 +0000 UTC Remote: 2024-04-25 20:03:32.07740605 +0000 UTC m=+254.767943919 (delta=85.666795ms)
	I0425 20:03:32.187675   72712 fix.go:200] guest clock delta is within tolerance: 85.666795ms
	I0425 20:03:32.187682   72712 start.go:83] releasing machines lock for "old-k8s-version-210442", held for 20.932154384s
	I0425 20:03:32.187709   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.187998   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:32.190538   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.190898   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.190932   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.191077   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191817   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191996   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.192076   72712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:32.192116   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.192208   72712 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:32.192230   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.194821   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.194988   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195191   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195212   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195334   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195368   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195500   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195673   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195677   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195847   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195866   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196063   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.196083   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196219   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.276462   72712 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:32.300979   72712 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:30.842282   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:32.843750   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.843779   72220 pod_ready.go:81] duration metric: took 8.508343704s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.843791   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850293   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.850316   72220 pod_ready.go:81] duration metric: took 6.517764ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850327   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855621   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.855657   72220 pod_ready.go:81] duration metric: took 5.31225ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855671   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860450   72220 pod_ready.go:92] pod "kube-proxy-whkbk" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.860483   72220 pod_ready.go:81] duration metric: took 4.797706ms for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860505   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865268   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.865286   72220 pod_ready.go:81] duration metric: took 4.774354ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865294   72220 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.458446   72712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:32.465434   72712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:32.465518   72712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:32.486929   72712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:32.486954   72712 start.go:494] detecting cgroup driver to use...
	I0425 20:03:32.487019   72712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:32.509425   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:32.530999   72712 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:32.531059   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:32.547280   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:32.563594   72712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:32.699207   72712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:32.875013   72712 docker.go:233] disabling docker service ...
	I0425 20:03:32.875096   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:32.897149   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:32.916105   72712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:33.071143   72712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:33.231529   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:33.252919   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:33.277388   72712 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0425 20:03:33.277457   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.290889   72712 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:33.290953   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.305488   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.319263   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.332961   72712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:33.354086   72712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:33.373431   72712 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:33.373517   72712 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:33.398458   72712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:33.418683   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:33.595555   72712 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:33.808015   72712 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:33.810391   72712 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:33.817593   72712 start.go:562] Will wait 60s for crictl version
	I0425 20:03:33.817646   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:33.823381   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:33.866310   72712 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:33.866411   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.905561   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.952764   72712 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0425 20:03:32.214679   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Start
	I0425 20:03:32.214880   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring networks are active...
	I0425 20:03:32.215746   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network default is active
	I0425 20:03:32.216106   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network mk-embed-certs-512173 is active
	I0425 20:03:32.216566   71966 main.go:141] libmachine: (embed-certs-512173) Getting domain xml...
	I0425 20:03:32.217397   71966 main.go:141] libmachine: (embed-certs-512173) Creating domain...
	I0425 20:03:33.554665   71966 main.go:141] libmachine: (embed-certs-512173) Waiting to get IP...
	I0425 20:03:33.555670   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.556123   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.556186   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.556089   73884 retry.go:31] will retry after 278.996701ms: waiting for machine to come up
	I0425 20:03:33.836750   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.837273   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.837301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.837244   73884 retry.go:31] will retry after 324.410317ms: waiting for machine to come up
	I0425 20:03:34.163017   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.163490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.163518   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.163457   73884 retry.go:31] will retry after 403.985826ms: waiting for machine to come up
	I0425 20:03:34.568824   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.569364   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.569397   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.569330   73884 retry.go:31] will retry after 427.12179ms: waiting for machine to come up
	I0425 20:03:34.998092   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.998684   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.998709   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.998646   73884 retry.go:31] will retry after 710.71475ms: waiting for machine to come up
	I0425 20:03:35.710643   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:35.711707   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:35.711736   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:35.711616   73884 retry.go:31] will retry after 806.283051ms: waiting for machine to come up
	I0425 20:03:31.803034   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:33.813548   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:35.304283   72304 node_ready.go:49] node "default-k8s-diff-port-142196" has status "Ready":"True"
	I0425 20:03:35.304311   72304 node_ready.go:38] duration metric: took 5.505123781s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:35.304323   72304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:35.311480   72304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320910   72304 pod_ready.go:92] pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:35.320938   72304 pod_ready.go:81] duration metric: took 9.425507ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320953   72304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:33.954161   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:33.957316   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.957778   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:33.957811   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.958080   72712 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:33.964467   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:33.984277   72712 kubeadm.go:877] updating cluster {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:33.984437   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 20:03:33.984499   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:34.049402   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:34.049479   72712 ssh_runner.go:195] Run: which lz4
	I0425 20:03:34.055519   72712 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 20:03:34.061481   72712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:34.061522   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0425 20:03:36.271646   72712 crio.go:462] duration metric: took 2.216165414s to copy over tarball
	I0425 20:03:36.271722   72712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:03:34.877483   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:37.373822   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:36.519514   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:36.520052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:36.520085   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:36.519968   73884 retry.go:31] will retry after 990.986618ms: waiting for machine to come up
	I0425 20:03:37.513151   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:37.513636   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:37.513669   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:37.513574   73884 retry.go:31] will retry after 1.371471682s: waiting for machine to come up
	I0425 20:03:38.886926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:38.887491   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:38.887527   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:38.887415   73884 retry.go:31] will retry after 1.633505345s: waiting for machine to come up
	I0425 20:03:40.523438   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:40.523975   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:40.524004   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:40.523926   73884 retry.go:31] will retry after 2.280577933s: waiting for machine to come up
	I0425 20:03:37.330040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.350040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.894331   72712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.622580176s)
	I0425 20:03:39.894364   72712 crio.go:469] duration metric: took 3.62268463s to extract the tarball
	I0425 20:03:39.894373   72712 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:39.965071   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:40.009534   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:40.009561   72712 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:03:40.009629   72712 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.009651   72712 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.009677   72712 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.009662   72712 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.009794   72712 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.009920   72712 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.010033   72712 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.010241   72712 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0425 20:03:40.011305   72712 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.011334   72712 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.011346   72712 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.011686   72712 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.012422   72712 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.012429   72712 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.012437   72712 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0425 20:03:40.012546   72712 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.143545   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.155203   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.157842   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.158081   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.161210   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.166515   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.181859   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0425 20:03:40.301699   72712 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0425 20:03:40.301759   72712 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.301805   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.379386   72712 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0425 20:03:40.379445   72712 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.379490   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406119   72712 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0425 20:03:40.406231   72712 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.406174   72712 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0425 20:03:40.406338   72712 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.406365   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406389   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420450   72712 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0425 20:03:40.420495   72712 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.420548   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420461   72712 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0425 20:03:40.420629   72712 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.420677   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430055   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.430110   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.430232   72712 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0425 20:03:40.430263   72712 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0425 20:03:40.430274   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.430277   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.430303   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430326   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.430389   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.582980   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0425 20:03:40.583094   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0425 20:03:40.587500   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0425 20:03:40.587564   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0425 20:03:40.587579   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0425 20:03:40.587650   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0425 20:03:40.587697   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0425 20:03:40.625942   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0425 20:03:40.941957   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:41.096086   72712 cache_images.go:92] duration metric: took 1.086507707s to LoadCachedImages
	W0425 20:03:41.096249   72712 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0425 20:03:41.096279   72712 kubeadm.go:928] updating node { 192.168.61.136 8443 v1.20.0 crio true true} ...
	I0425 20:03:41.096415   72712 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210442 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:41.096509   72712 ssh_runner.go:195] Run: crio config
	I0425 20:03:41.169311   72712 cni.go:84] Creating CNI manager for ""
	I0425 20:03:41.169341   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:41.169357   72712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:41.169397   72712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210442 NodeName:old-k8s-version-210442 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0425 20:03:41.169570   72712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210442"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:41.169639   72712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0425 20:03:41.182191   72712 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:41.182283   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:41.193546   72712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0425 20:03:41.218220   72712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:41.238647   72712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0425 20:03:41.259040   72712 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:41.263603   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:41.278007   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:41.425587   72712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:41.450990   72712 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442 for IP: 192.168.61.136
	I0425 20:03:41.451013   72712 certs.go:194] generating shared ca certs ...
	I0425 20:03:41.451034   72712 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:41.451225   72712 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:41.451307   72712 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:41.451323   72712 certs.go:256] generating profile certs ...
	I0425 20:03:41.451449   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.key
	I0425 20:03:41.451528   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key.1533c9ac
	I0425 20:03:41.451587   72712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key
	I0425 20:03:41.451789   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:41.451860   72712 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:41.451880   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:41.451915   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:41.451945   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:41.451968   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:41.452023   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:41.452870   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:41.510467   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:41.555595   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:41.606059   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:41.648206   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0425 20:03:41.690090   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:41.727674   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:41.766537   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:41.799524   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:41.828668   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:41.860964   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:41.890272   72712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:41.911787   72712 ssh_runner.go:195] Run: openssl version
	I0425 20:03:41.918926   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:41.933194   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.938995   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.939060   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.945934   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:41.959859   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:41.974906   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.980931   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.981006   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.987789   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:42.002455   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:42.016797   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023789   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023853   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.033189   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:42.047467   72712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:42.053552   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:42.063130   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:42.070290   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:42.079527   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:42.087983   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:42.096658   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
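
The openssl invocations above each run `openssl x509 -noout -in <cert> -checkend 86400`, i.e. they verify that an existing control-plane certificate remains valid for at least the next 24 hours before it is reused. The snippet below is a minimal, illustrative Go sketch of the same check; it is an assumption for explanation only (minikube shells out to openssl here rather than using this code), and the certificate path is just the first one from the log.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical stand-in for `openssl x509 -checkend 86400`:
    	// report failure if the certificate expires within the next 24 hours.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM certificate found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h; would need regeneration")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid for at least another 24h; safe to reuse")
    }

When all of these checks pass, the log continues directly into StartCluster with the existing certificates left in place.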
	I0425 20:03:42.103477   72712 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:42.103596   72712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:42.103649   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.155980   72712 cri.go:89] found id: ""
	I0425 20:03:42.156085   72712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:42.172499   72712 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:42.172525   72712 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:42.172532   72712 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:42.172580   72712 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:42.187864   72712 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:42.188948   72712 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210442" does not appear in /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:42.189659   72712 kubeconfig.go:62] /home/jenkins/minikube-integration/18757-6355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210442" cluster setting kubeconfig missing "old-k8s-version-210442" context setting]
	I0425 20:03:42.190635   72712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:42.192402   72712 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:42.207284   72712 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.136
	I0425 20:03:42.207318   72712 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:42.207329   72712 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:42.207403   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.251184   72712 cri.go:89] found id: ""
	I0425 20:03:42.251257   72712 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:42.271727   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:42.289161   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:42.289184   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:42.289237   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:42.302492   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:42.302588   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:42.317790   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:42.329940   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:42.330002   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:42.342772   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:39.375028   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:41.871821   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.805640   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:42.806121   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:42.806148   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:42.806072   73884 retry.go:31] will retry after 2.588054599s: waiting for machine to come up
	I0425 20:03:45.395282   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:45.395712   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:45.395759   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:45.395662   73884 retry.go:31] will retry after 3.473643777s: waiting for machine to come up
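
The repeated "will retry after ..." lines show embed-certs-512173 being polled for a DHCP lease with a delay that grows on each attempt, until the VM finally reports an IP (192.168.50.7, further down in the log). The following is a hedged sketch of that polling pattern only, not minikube's actual retry.go; `lookupIP` is a made-up placeholder that succeeds on the fourth attempt so the example is runnable.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // lookupIP stands in for the libvirt DHCP-lease query seen in the log;
    // here it simply fails three times and then returns an address.
    var attempts int

    func lookupIP() (string, error) {
    	attempts++
    	if attempts < 4 {
    		return "", errors.New("unable to find current IP address")
    	}
    	return "192.168.50.7", nil
    }

    // waitForIP retries lookupIP with a delay that grows ~1.5x per attempt,
    // mirroring the increasing intervals (≈1s, 1.4s, 1.6s, 2.3s, ...) in the log.
    func waitForIP(maxWait time.Duration) (string, error) {
    	deadline := time.Now().Add(maxWait)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		time.Sleep(delay)
    		delay = delay * 3 / 2
    	}
    	return "", fmt.Errorf("machine did not report an IP within %s", maxWait)
    }

    func main() {
    	ip, err := waitForIP(10 * time.Second)
    	fmt.Println(ip, err)
    }

The growing delay keeps the libvirt query from being hammered while the guest is still booting, which is why the intervals in the log stretch from under a second to several seconds.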
	I0425 20:03:41.329479   72304 pod_ready.go:92] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.329511   72304 pod_ready.go:81] duration metric: took 6.008549199s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.329523   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335660   72304 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.335688   72304 pod_ready.go:81] duration metric: took 6.15557ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335700   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341409   72304 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.341433   72304 pod_ready.go:81] duration metric: took 5.723469ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341446   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347145   72304 pod_ready.go:92] pod "kube-proxy-bqmtp" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.347167   72304 pod_ready.go:81] duration metric: took 5.713095ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347179   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376913   72304 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.376939   72304 pod_ready.go:81] duration metric: took 29.751827ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376951   72304 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:43.383378   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:45.884869   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.356480   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:42.357280   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:42.370403   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:42.384245   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:42.384332   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:42.398271   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:42.412361   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:42.575076   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.186458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.480114   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.594128   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.707129   72712 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:43.707221   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.207406   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.707733   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.208100   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.708041   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.207966   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.707255   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:47.207754   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:43.873747   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:46.374439   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:48.871928   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:48.872457   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:48.872490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:48.872393   73884 retry.go:31] will retry after 4.148424216s: waiting for machine to come up
	I0425 20:03:48.384599   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.883246   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:47.707730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.208213   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.707685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.207879   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.707914   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.208278   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.707691   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.207600   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.707365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:52.207931   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
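
Between 20:03:43 and 20:03:52 the restart path polls `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly twice a second, waiting for the kube-apiserver process to appear after the `kubeadm init phase control-plane` step. Below is a minimal sketch of such a wait loop; it is an assumption for illustration, not minikube's api_server.go, and it drops the sudo.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess re-runs pgrep until a process matching pattern exists or
    // the timeout elapses; pgrep exits 0 only when at least one process matches.
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("no process matching %q appeared within %s", pattern, timeout)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kube-apiserver is up")
    }
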
	I0425 20:03:48.872282   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.872356   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.874452   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:53.022813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023343   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has current primary IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023367   71966 main.go:141] libmachine: (embed-certs-512173) Found IP for machine: 192.168.50.7
	I0425 20:03:53.023381   71966 main.go:141] libmachine: (embed-certs-512173) Reserving static IP address...
	I0425 20:03:53.023750   71966 main.go:141] libmachine: (embed-certs-512173) Reserved static IP address: 192.168.50.7
	I0425 20:03:53.023770   71966 main.go:141] libmachine: (embed-certs-512173) Waiting for SSH to be available...
	I0425 20:03:53.023791   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.023827   71966 main.go:141] libmachine: (embed-certs-512173) DBG | skip adding static IP to network mk-embed-certs-512173 - found existing host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"}
	I0425 20:03:53.023848   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Getting to WaitForSSH function...
	I0425 20:03:53.025753   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.026132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026244   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH client type: external
	I0425 20:03:53.026268   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa (-rw-------)
	I0425 20:03:53.026301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:53.026313   71966 main.go:141] libmachine: (embed-certs-512173) DBG | About to run SSH command:
	I0425 20:03:53.026325   71966 main.go:141] libmachine: (embed-certs-512173) DBG | exit 0
	I0425 20:03:53.158487   71966 main.go:141] libmachine: (embed-certs-512173) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:53.158846   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetConfigRaw
	I0425 20:03:53.159567   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.161881   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162200   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.162257   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162492   71966 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/config.json ...
	I0425 20:03:53.162658   71966 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:53.162675   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:53.162875   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.164797   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.165140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165256   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.165402   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165561   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165659   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.165815   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.165989   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.166002   71966 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:53.283185   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:53.283219   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283455   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:03:53.283480   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283690   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.286427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.286843   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286969   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.287164   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287350   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.287641   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.287881   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.287904   71966 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-512173 && echo "embed-certs-512173" | sudo tee /etc/hostname
	I0425 20:03:53.423037   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-512173
	
	I0425 20:03:53.423067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.425749   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.426140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426329   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.426501   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426640   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426747   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.426866   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.427015   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.427083   71966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-512173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-512173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-512173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:53.553687   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:53.553715   71966 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:53.553749   71966 buildroot.go:174] setting up certificates
	I0425 20:03:53.553758   71966 provision.go:84] configureAuth start
	I0425 20:03:53.553775   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.554053   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.556655   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.556995   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.557034   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.557121   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.559341   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559692   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.559718   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559897   71966 provision.go:143] copyHostCerts
	I0425 20:03:53.559970   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:53.559984   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:53.560049   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:53.560129   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:53.560136   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:53.560155   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:53.560203   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:53.560214   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:53.560233   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:53.560278   71966 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-512173 san=[127.0.0.1 192.168.50.7 embed-certs-512173 localhost minikube]
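	Aside: the provisioning step above generates a per-machine server certificate whose SANs cover the loopback address, the guest IP and the machine/host names listed in san=[...]. A minimal sketch for inspecting those SANs by hand (the path simply mirrors the one in the log and is only illustrative):

	    # print the Subject Alternative Names baked into the generated server cert
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'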
	I0425 20:03:53.621714   71966 provision.go:177] copyRemoteCerts
	I0425 20:03:53.621777   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:53.621804   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.624556   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.624883   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.624914   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.625128   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.625324   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.625458   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.625602   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:53.715477   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:03:53.743782   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:53.771468   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:53.798701   71966 provision.go:87] duration metric: took 244.92871ms to configureAuth
	I0425 20:03:53.798726   71966 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:53.798922   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:53.798991   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.801607   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.801946   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.801972   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.802187   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.802373   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802628   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.802833   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.802986   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.803000   71966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:54.117164   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:54.117193   71966 machine.go:97] duration metric: took 954.522384ms to provisionDockerMachine
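	Aside: the tee into /etc/sysconfig/crio.minikube a few lines above hands CRI-O the --insecure-registry flag for the service CIDR before the runtime is restarted; on the minikube guest the crio unit presumably sources that file through an EnvironmentFile directive (an assumption, not verified in this log). A quick way to check on the guest:

	    # show the drop-in minikube wrote, then see whether the crio unit references it
	    cat /etc/sysconfig/crio.minikube
	    systemctl cat crio | grep -iE -A1 'environmentfile|execstart'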
	I0425 20:03:54.117207   71966 start.go:293] postStartSetup for "embed-certs-512173" (driver="kvm2")
	I0425 20:03:54.117219   71966 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:54.117238   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.117558   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:54.117591   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.120060   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.120454   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120575   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.120761   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.120891   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.121002   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.209919   71966 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:54.215633   71966 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:54.215663   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:54.215747   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:54.215860   71966 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:54.215996   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:54.227250   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:54.257169   71966 start.go:296] duration metric: took 139.949813ms for postStartSetup
	I0425 20:03:54.257212   71966 fix.go:56] duration metric: took 22.069363419s for fixHost
	I0425 20:03:54.257237   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.260255   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260588   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.260613   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260731   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.260928   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261099   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261266   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.261447   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:54.261644   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:54.261655   71966 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:03:54.376222   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075434.352338373
	
	I0425 20:03:54.376245   71966 fix.go:216] guest clock: 1714075434.352338373
	I0425 20:03:54.376255   71966 fix.go:229] Guest: 2024-04-25 20:03:54.352338373 +0000 UTC Remote: 2024-04-25 20:03:54.257217658 +0000 UTC m=+368.446046405 (delta=95.120715ms)
	I0425 20:03:54.376287   71966 fix.go:200] guest clock delta is within tolerance: 95.120715ms
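	Aside: the "date +%!s(MISSING).%!N(MISSING)" shown above is not the literal command; it is Go's fmt placeholder for a format verb that lost its argument when the log line was rendered, and the command actually sent over SSH is almost certainly `date +%s.%N`. Its output is what fix.go compares against the host clock to produce the delta reported here. A rough reproduction of that check, reusing the SSH identity shown elsewhere in the log:

	    # read the guest clock with nanosecond precision and compare to the local clock
	    guest=$(ssh -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa \
	            docker@192.168.50.7 'date +%s.%N')
	    host=$(date +%s.%N)
	    awk -v h="$host" -v g="$guest" 'BEGIN{printf "delta: %.3f s\n", h - g}'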
	I0425 20:03:54.376295   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 22.188484297s
	I0425 20:03:54.376317   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.376600   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:54.379217   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379646   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.379678   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379869   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380436   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380633   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380729   71966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:54.380779   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.380857   71966 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:54.380880   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.383698   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384081   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384283   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384471   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.384610   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.384647   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384683   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384781   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.384821   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384982   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.385131   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.385330   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.468506   71966 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:54.493995   71966 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:54.642719   71966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:54.649565   71966 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:54.649632   71966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:54.667526   71966 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:54.667546   71966 start.go:494] detecting cgroup driver to use...
	I0425 20:03:54.667596   71966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:54.685384   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:54.701852   71966 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:54.701905   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:54.718559   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:54.734874   71966 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:54.858325   71966 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:55.045158   71966 docker.go:233] disabling docker service ...
	I0425 20:03:55.045219   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:55.061668   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:55.076486   71966 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:55.207287   71966 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:55.352537   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:55.369470   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:55.392638   71966 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:55.392718   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.404590   71966 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:55.404655   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.416129   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.427176   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.438632   71966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:55.450725   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.462912   71966 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.485340   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.498134   71966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:55.508378   71966 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:55.508451   71966 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:55.523073   71966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:55.533901   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:55.666845   71966 ssh_runner.go:195] Run: sudo systemctl restart crio
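	Aside: the sysctl probe above exits with status 255 because /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why the very next step is `sudo modprobe br_netfilter`. A generic (not minikube-specific) way to make both the module and the sysctl survive reboots:

	    # load the module now and persist both the module and the sysctl
	    sudo modprobe br_netfilter
	    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
	    echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-bridge-nf.conf
	    sudo sysctl --system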
	I0425 20:03:55.828131   71966 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:55.828199   71966 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:55.833768   71966 start.go:562] Will wait 60s for crictl version
	I0425 20:03:55.833824   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:03:55.838000   71966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:55.881652   71966 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:55.881753   71966 ssh_runner.go:195] Run: crio --version
	I0425 20:03:55.917675   71966 ssh_runner.go:195] Run: crio --version
	I0425 20:03:55.953046   71966 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:03:52.884447   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:54.884538   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.707459   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.208241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.707431   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.207538   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.707289   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.207319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.707625   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.207562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.708324   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:57.207348   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.373713   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.374476   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:55.954484   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:55.957214   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957611   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:55.957638   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957832   71966 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:55.962420   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:55.976512   71966 kubeadm.go:877] updating cluster {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:55.976626   71966 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:55.976694   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:56.019881   71966 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:56.019942   71966 ssh_runner.go:195] Run: which lz4
	I0425 20:03:56.024524   71966 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 20:03:56.029297   71966 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:56.029339   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:57.736602   71966 crio.go:462] duration metric: took 1.712117844s to copy over tarball
	I0425 20:03:57.736666   71966 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:04:00.331696   71966 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.594977915s)
	I0425 20:04:00.331739   71966 crio.go:469] duration metric: took 2.595109768s to extract the tarball
	I0425 20:04:00.331751   71966 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:04:00.375437   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:04:00.430963   71966 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:04:00.430987   71966 cache_images.go:84] Images are preloaded, skipping loading
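	Aside: the first `crictl images --output json` call (before the tarball copy) found none of the expected images, so minikube scp'd the ~395 MB preloaded-images tarball into the guest and untarred it over /var; the second call then sees CRI-O's image store fully populated and per-image pulls are skipped. The same check can be run by hand on the guest:

	    sudo crictl images                  # human-readable listing of the CRI-O image store
	    sudo crictl images --output json    # the form minikube parses in the log above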
	I0425 20:04:00.430994   71966 kubeadm.go:928] updating node { 192.168.50.7 8443 v1.30.0 crio true true} ...
	I0425 20:04:00.431081   71966 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-512173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:04:00.431154   71966 ssh_runner.go:195] Run: crio config
	I0425 20:04:00.487082   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:00.487106   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:00.487117   71966 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:04:00.487135   71966 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-512173 NodeName:embed-certs-512173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:04:00.487306   71966 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-512173"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:04:00.487378   71966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:04:00.498819   71966 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:04:00.498881   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:04:00.509212   71966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0425 20:04:00.527703   71966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:04:00.546867   71966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
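	Aside: the three-document config rendered above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new here. If a run like this ever produces a suspect config, recent kubeadm releases ship a validate subcommand that can be pointed at the file on the guest (hedged: availability depends on the kubeadm version in use):

	    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new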
	I0425 20:04:00.566302   71966 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0425 20:04:00.570629   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:04:00.584123   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:00.717589   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:00.743108   71966 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173 for IP: 192.168.50.7
	I0425 20:04:00.743173   71966 certs.go:194] generating shared ca certs ...
	I0425 20:04:00.743201   71966 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:00.743397   71966 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:04:00.743462   71966 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:04:00.743480   71966 certs.go:256] generating profile certs ...
	I0425 20:04:00.743644   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/client.key
	I0425 20:04:00.743729   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key.4a0c231f
	I0425 20:04:00.743789   71966 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key
	I0425 20:04:00.743964   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:04:00.744019   71966 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:04:00.744033   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:04:00.744064   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:04:00.744093   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:04:00.744117   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:04:00.744158   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:04:00.745130   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:04:00.797856   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:04:00.848631   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:56.885355   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:58.885857   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.707868   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.208319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.207410   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.707562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.208006   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.708245   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.208178   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.707239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:02.207926   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.873851   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.372919   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:00.877499   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:04:01.210716   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0425 20:04:01.239562   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:04:01.267356   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:04:01.295649   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:04:01.323739   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:04:01.350440   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:04:01.379693   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:04:01.409347   71966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:04:01.429857   71966 ssh_runner.go:195] Run: openssl version
	I0425 20:04:01.437636   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:04:01.449656   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455022   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455074   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.461442   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:04:01.473323   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:04:01.485988   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491661   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491719   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.498567   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:04:01.510983   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:04:01.523098   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528619   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528667   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.535129   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
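	Aside: the `openssl x509 -hash` / `ln -fs <hash>.0` pairs above implement the standard OpenSSL CA directory layout: each trusted cert under /etc/ssl/certs is reachable through a symlink named after its subject-name hash, which is how TLS clients look it up. The hash can be recomputed by hand to confirm a link points at the right certificate:

	    # the printed hash should match the symlink name created above (here 3ec20f2e.0)
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	    ls -l /etc/ssl/certs/3ec20f2e.0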
	I0425 20:04:01.546668   71966 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:04:01.552076   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:04:01.558928   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:04:01.566406   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:04:01.574761   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:04:01.581250   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:04:01.588506   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
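	Aside: each `-checkend 86400` call above asks openssl whether the certificate will still be valid 24 hours from now; the command exits 0 if so and 1 if the cert is expired or expires within the window, which is how minikube decides whether control-plane certs need regenerating before start. The generic form, reduced to one cert:

	    if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	      echo "cert is good for at least another day"
	    else
	      echo "cert expires within 24h (or is already expired)"
	    fi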
	I0425 20:04:01.594844   71966 kubeadm.go:391] StartCluster: {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:04:01.594917   71966 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:04:01.594978   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.648050   71966 cri.go:89] found id: ""
	I0425 20:04:01.648155   71966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:04:01.664291   71966 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:04:01.664318   71966 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:04:01.664325   71966 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:04:01.664387   71966 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:04:01.678686   71966 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:04:01.680096   71966 kubeconfig.go:125] found "embed-certs-512173" server: "https://192.168.50.7:8443"
	I0425 20:04:01.682375   71966 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:04:01.699073   71966 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0425 20:04:01.699109   71966 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:04:01.699122   71966 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:04:01.699190   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.744556   71966 cri.go:89] found id: ""
	I0425 20:04:01.744633   71966 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:04:01.767121   71966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:04:01.778499   71966 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:04:01.778517   71966 kubeadm.go:156] found existing configuration files:
	
	I0425 20:04:01.778575   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:04:01.789171   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:04:01.789242   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:04:01.800000   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:04:01.811015   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:04:01.811078   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:04:01.821752   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.832900   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:04:01.832962   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.844058   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:04:01.854774   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:04:01.854824   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
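	Aside: the loop above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not reference it; on this freshly re-provisioned guest none of the four files exist, so every grep exits 2 and each `rm -f` is a harmless no-op. The pattern, reduced to a single file:

	    f=/etc/kubernetes/admin.conf
	    if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "$f"; then
	      sudo rm -f "$f"   # stale or missing: drop it so kubeadm regenerates it
	    fi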
	I0425 20:04:01.866086   71966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:04:01.879229   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.180778   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.971467   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.202841   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.286951   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
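	Aside: rather than a full `kubeadm init`, the restart path re-runs individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file, regenerating only the pieces the earlier cleanup removed. The five commands run over SSH above boil down to this sketch:

	    CFG=/var/tmp/minikube/kubeadm.yaml
	    K=/var/lib/minikube/binaries/v1.30.0/kubeadm
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" $K init phase $phase --config $CFG
	    done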
	I0425 20:04:03.412260   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:04:03.412375   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.913176   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.413418   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.443763   71966 api_server.go:72] duration metric: took 1.031501246s to wait for apiserver process to appear ...
	I0425 20:04:04.443796   71966 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:04:04.443816   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:04.444334   71966 api_server.go:269] stopped: https://192.168.50.7:8443/healthz: Get "https://192.168.50.7:8443/healthz": dial tcp 192.168.50.7:8443: connect: connection refused
	I0425 20:04:04.943937   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:01.384590   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:03.885859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.707796   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.207913   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.708267   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.207491   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.707894   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.207346   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.707801   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.208283   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.707342   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:07.208190   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.381611   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:06.875270   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.463721   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.463767   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.463785   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.479254   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.479283   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.944812   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.949683   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:07.949710   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.444237   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.451663   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.451706   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.944231   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.949165   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.949194   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.444776   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.449703   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.449732   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.943865   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.948474   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.948509   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.444040   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.448740   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:10.448781   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.944487   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.950181   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:04:10.957455   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:04:10.957479   71966 api_server.go:131] duration metric: took 6.513676295s to wait for apiserver health ...
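
The healthz sequence above follows the usual apiserver restart pattern: connection refused while the server binds, 403 for the anonymous probe until the RBAC bootstrap roles land, 500 while post-start hooks finish, then 200. A minimal sketch of that polling loop (illustrative only, not minikube's api_server.go; the endpoint and ~500 ms cadence are taken from the log, the 2-minute deadline and TLS handling are assumptions):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // An anonymous test probe: skip verification of the apiserver's serving cert.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        url := "https://192.168.50.7:8443/healthz"

        deadline := time.Now().Add(2 * time.Minute) // assumed overall deadline
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                // Corresponds to the "stopped: ... connection refused" lines above.
                fmt.Println("healthz not reachable:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // 200 "ok": apiserver is healthy
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the retry cadence in the log
        }
    }
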
	I0425 20:04:10.957487   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:10.957496   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:10.959196   71966 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:04:06.384595   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:08.883972   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.707466   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.207370   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.707951   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.207604   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.708057   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.207422   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.707391   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.207510   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.707828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:12.207519   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.960795   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:04:10.977005   71966 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
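
The two lines above create /etc/cni/net.d on the guest and push a bridge CNI config (1-k8s.conflist) into it. A sketch of what writing such a config could look like; the JSON here is a generic bridge + host-local example following the CNI spec, not the exact 496-byte file minikube generates, and the subnet is a placeholder:

    package main

    import (
        "log"
        "os"
    )

    // Illustrative bridge CNI config; field names follow the CNI spec.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil { // "sudo mkdir -p" equivalent
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }
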
	I0425 20:04:11.001393   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:04:11.021408   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:04:11.021439   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:04:11.021453   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:04:11.021466   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:04:11.021478   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:04:11.021495   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:04:11.021502   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:04:11.021513   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:04:11.021521   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:04:11.021533   71966 system_pods.go:74] duration metric: took 20.120592ms to wait for pod list to return data ...
	I0425 20:04:11.021540   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:04:11.025328   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:04:11.025360   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:04:11.025374   71966 node_conditions.go:105] duration metric: took 3.826846ms to run NodePressure ...
	I0425 20:04:11.025394   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:11.304673   71966 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309061   71966 kubeadm.go:733] kubelet initialised
	I0425 20:04:11.309082   71966 kubeadm.go:734] duration metric: took 4.385794ms waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309089   71966 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:11.314583   71966 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.319490   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319515   71966 pod_ready.go:81] duration metric: took 4.900118ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.319524   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319534   71966 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.324084   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324101   71966 pod_ready.go:81] duration metric: took 4.557199ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.324108   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324113   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.328151   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328167   71966 pod_ready.go:81] duration metric: took 4.047894ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.328174   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328184   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.404944   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.404982   71966 pod_ready.go:81] duration metric: took 76.789573ms for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.404997   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.405006   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.805191   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805221   71966 pod_ready.go:81] duration metric: took 400.202708ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.805238   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805248   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.205817   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205847   71966 pod_ready.go:81] duration metric: took 400.591033ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.205858   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205866   71966 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.605705   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605736   71966 pod_ready.go:81] duration metric: took 399.849241ms for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.605745   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605754   71966 pod_ready.go:38] duration metric: took 1.29665644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
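
The pod_ready block above inspects each system-critical pod's Ready condition and skips pods whose node is still NotReady. A client-go sketch of the underlying Ready-condition wait (standard client-go/apimachinery calls; the kubeconfig path is a placeholder, the pod name is taken from the log, and this is not minikube's own pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll every 2s, for up to 4 minutes, until the PodReady condition is True.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-xsptj", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep retrying on transient errors
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("wait result:", err)
    }
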
	I0425 20:04:12.605776   71966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:04:12.620368   71966 ops.go:34] apiserver oom_adj: -16
	I0425 20:04:12.620397   71966 kubeadm.go:591] duration metric: took 10.956065292s to restartPrimaryControlPlane
	I0425 20:04:12.620405   71966 kubeadm.go:393] duration metric: took 11.025567867s to StartCluster
	I0425 20:04:12.620419   71966 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.620492   71966 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:04:12.623272   71966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.623577   71966 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:04:12.625335   71966 out.go:177] * Verifying Kubernetes components...
	I0425 20:04:12.623608   71966 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:04:12.623775   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:04:12.626619   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:12.626625   71966 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-512173"
	I0425 20:04:12.626642   71966 addons.go:69] Setting metrics-server=true in profile "embed-certs-512173"
	I0425 20:04:12.626664   71966 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-512173"
	W0425 20:04:12.626674   71966 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:04:12.626681   71966 addons.go:234] Setting addon metrics-server=true in "embed-certs-512173"
	W0425 20:04:12.626690   71966 addons.go:243] addon metrics-server should already be in state true
	I0425 20:04:12.626623   71966 addons.go:69] Setting default-storageclass=true in profile "embed-certs-512173"
	I0425 20:04:12.626709   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626714   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626718   71966 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-512173"
	I0425 20:04:12.626985   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627013   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627020   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627035   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627088   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627130   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.642680   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0425 20:04:12.642798   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0425 20:04:12.642972   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0425 20:04:12.643182   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643288   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643418   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643671   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643696   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643871   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643884   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643893   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643915   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.644227   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644235   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644403   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.644431   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644819   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.644942   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.644980   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.645022   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.647992   71966 addons.go:234] Setting addon default-storageclass=true in "embed-certs-512173"
	W0425 20:04:12.648011   71966 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:04:12.648045   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.648393   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.648429   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.660989   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41421
	I0425 20:04:12.661534   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.662561   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.662592   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.662614   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0425 20:04:12.662804   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0425 20:04:12.662947   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663016   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663116   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.663173   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663515   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663547   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663585   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663604   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663882   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663920   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.664096   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.664487   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.664506   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.665031   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.667087   71966 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:04:12.668326   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:04:12.668343   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:04:12.668361   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.666460   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.669907   71966 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:04:09.373628   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:11.376301   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.671391   71966 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.671411   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:04:12.671427   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.671566   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672113   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.672132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672233   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.672353   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.672439   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.672525   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.674511   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.674926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.674951   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.675178   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.675357   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.675505   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.675662   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.683720   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0425 20:04:12.684195   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.684736   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.684755   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.685100   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.685282   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.687009   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.687257   71966 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.687277   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:04:12.687325   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.689958   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690356   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.690374   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690446   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.690655   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.690841   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.690989   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.846840   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:12.865045   71966 node_ready.go:35] waiting up to 6m0s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:12.938848   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:04:12.938875   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:04:12.941038   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.959316   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.977813   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:04:12.977841   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:04:13.050586   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:13.050610   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:04:13.111207   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:14.253195   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.31212607s)
	I0425 20:04:14.253252   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253247   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.293897647s)
	I0425 20:04:14.253268   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253303   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253371   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253625   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253641   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253650   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253656   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253677   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253690   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253699   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253711   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253876   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254099   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253911   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253949   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253977   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254193   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.260565   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.260584   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.260830   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.260850   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.342979   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.231720554s)
	I0425 20:04:14.343042   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343349   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.343358   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343374   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343390   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343398   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343602   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343623   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343633   71966 addons.go:470] Verifying addon metrics-server=true in "embed-certs-512173"
	I0425 20:04:14.346631   71966 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0425 20:04:14.347936   71966 addons.go:505] duration metric: took 1.724328435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
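
The addon step above boils down to running the guest's kubectl binary against the embedded kubeconfig with the generated manifests. A sketch of that invocation via os/exec (paths and file names are copied from the log; running this outside the minikube guest is purely illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // sudo accepts leading VAR=value arguments, as in the logged command line.
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.30.0/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
        )
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s\nerr=%v\n", out, err)
    }
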
	I0425 20:04:14.869074   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.383960   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:13.384840   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.883656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.707816   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.207561   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.708264   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.207822   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.707509   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.207507   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.707899   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.208254   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.708246   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:17.207508   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.873212   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.873263   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:18.373183   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:16.870001   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:18.368960   71966 node_ready.go:49] node "embed-certs-512173" has status "Ready":"True"
	I0425 20:04:18.368991   71966 node_ready.go:38] duration metric: took 5.503919958s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:18.369003   71966 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:18.375440   71966 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380902   71966 pod_ready.go:92] pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.380920   71966 pod_ready.go:81] duration metric: took 5.456921ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380928   71966 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386330   71966 pod_ready.go:92] pod "etcd-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.386386   71966 pod_ready.go:81] duration metric: took 5.451019ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386402   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391115   71966 pod_ready.go:92] pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.391138   71966 pod_ready.go:81] duration metric: took 4.727835ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391149   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:20.398316   71966 pod_ready.go:102] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.885191   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:20.384439   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.707948   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.207953   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.707659   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.207609   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.707567   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.207989   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.707938   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.208305   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.707827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:22.207940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.374376   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.873180   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.899221   71966 pod_ready.go:92] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.899240   71966 pod_ready.go:81] duration metric: took 4.508083804s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.899250   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904904   71966 pod_ready.go:92] pod "kube-proxy-8247p" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.904922   71966 pod_ready.go:81] duration metric: took 5.665557ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904929   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910035   71966 pod_ready.go:92] pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.910051   71966 pod_ready.go:81] duration metric: took 5.116298ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910059   71966 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:24.919233   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.884480   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:25.384287   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.707381   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.207532   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.707461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.208239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.707742   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.208365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.707323   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.207485   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.707727   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:27.208332   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.373538   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.872428   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.420297   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.918808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.385722   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.883321   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.707275   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.207776   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.708096   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.207685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.708249   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.207647   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.707943   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.207471   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.707902   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:32.207582   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.872576   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.372818   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.416593   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:34.416976   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:31.884120   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:33.885341   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:35.886190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.708066   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.208090   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.707474   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.207664   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.708110   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.208160   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.707940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.207505   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.708334   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:37.207939   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.375813   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.873166   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.417945   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.916796   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.384530   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.384673   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:37.707256   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.207621   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.708237   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.208327   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.707542   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.207371   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.708300   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.207577   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.708097   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:42.207684   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.876272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:41.372217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.918223   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:43.420086   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.389390   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:44.885243   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.708257   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.207407   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.707548   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:43.707618   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:43.753656   72712 cri.go:89] found id: ""
	I0425 20:04:43.753686   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.753698   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:43.753706   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:43.753770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:43.797957   72712 cri.go:89] found id: ""
	I0425 20:04:43.797982   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.797991   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:43.797996   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:43.798051   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:43.836700   72712 cri.go:89] found id: ""
	I0425 20:04:43.836729   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.836737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:43.836742   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:43.836795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:43.883452   72712 cri.go:89] found id: ""
	I0425 20:04:43.883478   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.883486   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:43.883492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:43.883544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:43.929975   72712 cri.go:89] found id: ""
	I0425 20:04:43.930004   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.930014   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:43.930022   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:43.930089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:43.967648   72712 cri.go:89] found id: ""
	I0425 20:04:43.967681   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.967693   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:43.967701   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:43.967758   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:44.011024   72712 cri.go:89] found id: ""
	I0425 20:04:44.011048   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.011072   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:44.011078   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:44.011129   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:44.050233   72712 cri.go:89] found id: ""
	I0425 20:04:44.050263   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.050274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:44.050286   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:44.050302   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:44.196275   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:44.196307   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:44.196323   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:44.260707   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:44.260748   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:44.306051   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:44.306090   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:44.357643   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:44.357682   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:46.875982   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:46.890987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:46.891062   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:46.935855   72712 cri.go:89] found id: ""
	I0425 20:04:46.935878   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.935885   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:46.935891   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:46.935948   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:46.978634   72712 cri.go:89] found id: ""
	I0425 20:04:46.978662   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.978674   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:46.978681   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:46.978749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:47.019845   72712 cri.go:89] found id: ""
	I0425 20:04:47.019864   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.019872   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:47.019877   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:47.019933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:47.065002   72712 cri.go:89] found id: ""
	I0425 20:04:47.065040   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.065064   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:47.065072   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:47.065139   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:47.106370   72712 cri.go:89] found id: ""
	I0425 20:04:47.106404   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.106416   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:47.106423   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:47.106483   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:47.143851   72712 cri.go:89] found id: ""
	I0425 20:04:47.143874   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.143883   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:47.143888   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:47.143932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:47.186130   72712 cri.go:89] found id: ""
	I0425 20:04:47.186160   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.186168   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:47.186174   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:47.186238   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:47.228959   72712 cri.go:89] found id: ""
	I0425 20:04:47.228984   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.228992   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:47.229000   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:47.229010   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:47.299852   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:47.299893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:47.346078   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:47.346111   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:43.872670   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:46.373259   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:45.917948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.919494   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:50.420952   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.388353   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:49.884300   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.405897   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:47.405932   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:47.424426   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:47.424455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:47.506603   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.007697   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:50.023258   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:50.023333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:50.066794   72712 cri.go:89] found id: ""
	I0425 20:04:50.066827   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.066836   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:50.066842   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:50.066913   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:50.109167   72712 cri.go:89] found id: ""
	I0425 20:04:50.109200   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.109212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:50.109219   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:50.109306   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:50.151854   72712 cri.go:89] found id: ""
	I0425 20:04:50.151878   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.151886   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:50.151892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:50.151940   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:50.190600   72712 cri.go:89] found id: ""
	I0425 20:04:50.190632   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.190644   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:50.190672   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:50.190742   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:50.232851   72712 cri.go:89] found id: ""
	I0425 20:04:50.232874   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.232883   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:50.232889   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:50.232935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:50.274941   72712 cri.go:89] found id: ""
	I0425 20:04:50.274971   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.274983   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:50.274990   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:50.275069   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:50.320954   72712 cri.go:89] found id: ""
	I0425 20:04:50.320981   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.320992   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:50.320999   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:50.321068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:50.361799   72712 cri.go:89] found id: ""
	I0425 20:04:50.361829   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.361839   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:50.361847   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:50.361858   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:50.457792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.457819   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:50.457834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:50.539653   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:50.539702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:50.598740   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:50.598774   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:50.650501   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:50.650533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:48.872490   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.374484   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:52.919420   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:55.420126   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.887536   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:54.389174   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:53.167827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:53.183324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:53.183403   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:53.227598   72712 cri.go:89] found id: ""
	I0425 20:04:53.227641   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.227650   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:53.227655   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:53.227700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:53.271170   72712 cri.go:89] found id: ""
	I0425 20:04:53.271200   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.271212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:53.271220   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:53.271304   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:53.318185   72712 cri.go:89] found id: ""
	I0425 20:04:53.318233   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.318246   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:53.318255   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:53.318324   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:53.372199   72712 cri.go:89] found id: ""
	I0425 20:04:53.372228   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.372238   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:53.372244   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:53.372367   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:53.414048   72712 cri.go:89] found id: ""
	I0425 20:04:53.414080   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.414091   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:53.414099   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:53.414170   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:53.455746   72712 cri.go:89] found id: ""
	I0425 20:04:53.455806   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.455819   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:53.455827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:53.455901   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:53.497969   72712 cri.go:89] found id: ""
	I0425 20:04:53.497996   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.498004   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:53.498011   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:53.498057   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:53.543642   72712 cri.go:89] found id: ""
	I0425 20:04:53.543668   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.543675   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:53.543684   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:53.543693   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:53.596106   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:53.596144   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:53.612755   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:53.612787   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:53.693068   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:53.693089   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:53.693102   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:53.771499   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:53.771535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:56.322663   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:56.336866   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:56.336945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:56.375515   72712 cri.go:89] found id: ""
	I0425 20:04:56.375556   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.375567   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:56.375574   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:56.375641   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:56.423230   72712 cri.go:89] found id: ""
	I0425 20:04:56.423261   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.423273   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:56.423281   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:56.423366   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:56.467786   72712 cri.go:89] found id: ""
	I0425 20:04:56.467814   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.467835   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:56.467842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:56.467895   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:56.517671   72712 cri.go:89] found id: ""
	I0425 20:04:56.517696   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.517708   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:56.517715   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:56.517770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:56.558622   72712 cri.go:89] found id: ""
	I0425 20:04:56.558651   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.558662   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:56.558669   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:56.558746   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:56.601350   72712 cri.go:89] found id: ""
	I0425 20:04:56.601374   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.601382   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:56.601387   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:56.601444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:56.645892   72712 cri.go:89] found id: ""
	I0425 20:04:56.645923   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.645934   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:56.645940   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:56.646001   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:56.691619   72712 cri.go:89] found id: ""
	I0425 20:04:56.691645   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.691656   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:56.691665   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:56.691679   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:56.744854   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:56.744891   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:56.762523   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:56.762556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:56.843396   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:56.843422   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:56.843437   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:56.933785   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:56.933825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:53.872514   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.372956   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:58.373649   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:57.917208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.920979   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.884907   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.385506   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.481512   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:59.497510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:59.497588   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:59.547382   72712 cri.go:89] found id: ""
	I0425 20:04:59.547412   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.547423   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:59.547432   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:59.547486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:59.597671   72712 cri.go:89] found id: ""
	I0425 20:04:59.597699   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.597711   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:59.597717   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:59.597762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:59.641455   72712 cri.go:89] found id: ""
	I0425 20:04:59.641486   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.641497   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:59.641503   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:59.641613   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:59.685052   72712 cri.go:89] found id: ""
	I0425 20:04:59.685092   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.685104   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:59.685112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:59.685173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:59.735912   72712 cri.go:89] found id: ""
	I0425 20:04:59.735943   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.735951   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:59.735957   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:59.736025   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:59.799294   72712 cri.go:89] found id: ""
	I0425 20:04:59.799322   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.799332   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:59.799338   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:59.799395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:59.871270   72712 cri.go:89] found id: ""
	I0425 20:04:59.871297   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.871308   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:59.871315   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:59.871377   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:59.919001   72712 cri.go:89] found id: ""
	I0425 20:04:59.919091   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.919110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:59.919120   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:59.919135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:59.973458   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:59.973498   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:59.989729   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:59.989757   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:00.072887   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:00.072911   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:00.072926   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:00.153886   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:00.153921   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:00.873812   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.372969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.417960   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:04.420353   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:01.885238   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.887277   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:02.722771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:02.722831   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:02.770101   72712 cri.go:89] found id: ""
	I0425 20:05:02.770134   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.770147   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:02.770154   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:02.770224   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:02.817819   72712 cri.go:89] found id: ""
	I0425 20:05:02.817854   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.817865   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:02.817898   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:02.817963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:02.857036   72712 cri.go:89] found id: ""
	I0425 20:05:02.857066   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.857077   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:02.857085   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:02.857144   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:02.900112   72712 cri.go:89] found id: ""
	I0425 20:05:02.900145   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.900157   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:02.900164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:02.900221   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:02.941079   72712 cri.go:89] found id: ""
	I0425 20:05:02.941109   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.941116   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:02.941121   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:02.941198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:02.983458   72712 cri.go:89] found id: ""
	I0425 20:05:02.983490   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.983502   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:02.983510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:02.983574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:03.025424   72712 cri.go:89] found id: ""
	I0425 20:05:03.025451   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.025462   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:03.025469   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:03.025556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:03.065285   72712 cri.go:89] found id: ""
	I0425 20:05:03.065316   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.065328   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:03.065340   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:03.065351   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:03.121235   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:03.121267   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:03.138036   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:03.138073   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:03.213604   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:03.213638   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:03.213655   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:03.296696   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:03.296741   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.842642   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:05.859125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:05.859199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:05.906505   72712 cri.go:89] found id: ""
	I0425 20:05:05.906529   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.906537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:05.906542   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:05.906595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:05.950793   72712 cri.go:89] found id: ""
	I0425 20:05:05.950819   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.950831   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:05.950838   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:05.950902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:05.991612   72712 cri.go:89] found id: ""
	I0425 20:05:05.991644   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.991654   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:05.991661   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:05.991755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:06.032273   72712 cri.go:89] found id: ""
	I0425 20:05:06.032314   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.032326   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:06.032334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:06.032392   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:06.071802   72712 cri.go:89] found id: ""
	I0425 20:05:06.071833   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.071844   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:06.071852   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:06.071908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:06.116676   72712 cri.go:89] found id: ""
	I0425 20:05:06.116702   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.116710   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:06.116716   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:06.116759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:06.154720   72712 cri.go:89] found id: ""
	I0425 20:05:06.154753   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.154765   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:06.154771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:06.154842   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:06.196421   72712 cri.go:89] found id: ""
	I0425 20:05:06.196457   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.196469   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:06.196480   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:06.196493   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:06.251061   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:06.251122   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:06.267764   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:06.267799   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:06.345302   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:06.345334   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:06.345349   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:06.427836   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:06.427868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.873928   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.372014   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.422386   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.916659   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.384700   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.883611   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:10.885814   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.989442   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:09.004493   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:09.004551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:09.056062   72712 cri.go:89] found id: ""
	I0425 20:05:09.056086   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.056096   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:09.056101   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:09.056148   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:09.096791   72712 cri.go:89] found id: ""
	I0425 20:05:09.096817   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.096827   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:09.096834   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:09.096889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:09.134649   72712 cri.go:89] found id: ""
	I0425 20:05:09.134680   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.134691   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:09.134698   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:09.134757   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:09.175980   72712 cri.go:89] found id: ""
	I0425 20:05:09.176010   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.176021   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:09.176028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:09.176084   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:09.216263   72712 cri.go:89] found id: ""
	I0425 20:05:09.216299   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.216313   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:09.216325   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:09.216395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:09.260498   72712 cri.go:89] found id: ""
	I0425 20:05:09.260528   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.260538   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:09.260544   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:09.260603   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:09.303154   72712 cri.go:89] found id: ""
	I0425 20:05:09.303178   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.303201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:09.303209   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:09.303269   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:09.350798   72712 cri.go:89] found id: ""
	I0425 20:05:09.350829   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.350840   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:09.350852   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:09.350868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:09.405295   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:09.405332   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:09.422788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:09.422820   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:09.501819   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:09.501841   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:09.501855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:09.586938   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:09.586981   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:12.132731   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:12.148860   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:12.148935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:12.194021   72712 cri.go:89] found id: ""
	I0425 20:05:12.194051   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.194064   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:12.194072   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:12.194152   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:12.234680   72712 cri.go:89] found id: ""
	I0425 20:05:12.234710   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.234721   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:12.234728   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:12.234792   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:12.277751   72712 cri.go:89] found id: ""
	I0425 20:05:12.277783   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.277794   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:12.277802   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:12.277864   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:12.324068   72712 cri.go:89] found id: ""
	I0425 20:05:12.324100   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.324117   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:12.324125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:12.324187   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:10.374594   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.873217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:11.424208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.425980   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.387259   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.884337   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.366797   72712 cri.go:89] found id: ""
	I0425 20:05:12.366825   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.366837   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:12.366844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:12.366903   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:12.413092   72712 cri.go:89] found id: ""
	I0425 20:05:12.413120   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.413132   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:12.413139   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:12.413198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:12.461229   72712 cri.go:89] found id: ""
	I0425 20:05:12.461253   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.461262   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:12.461268   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:12.461333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:12.504646   72712 cri.go:89] found id: ""
	I0425 20:05:12.504669   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.504677   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:12.504685   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:12.504698   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:12.561630   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:12.561673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:12.578043   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:12.578069   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:12.655176   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:12.655195   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:12.655209   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:12.736323   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:12.736357   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.287503   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:15.302830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:15.302893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:15.339479   72712 cri.go:89] found id: ""
	I0425 20:05:15.339509   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.339519   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:15.339527   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:15.339589   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:15.381431   72712 cri.go:89] found id: ""
	I0425 20:05:15.381458   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.381467   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:15.381475   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:15.381537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:15.423729   72712 cri.go:89] found id: ""
	I0425 20:05:15.423755   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.423767   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:15.423774   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:15.423833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:15.464367   72712 cri.go:89] found id: ""
	I0425 20:05:15.464401   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.464413   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:15.464421   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:15.464489   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:15.508306   72712 cri.go:89] found id: ""
	I0425 20:05:15.508336   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.508348   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:15.508356   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:15.508419   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:15.548572   72712 cri.go:89] found id: ""
	I0425 20:05:15.548600   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.548610   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:15.548616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:15.548678   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:15.592885   72712 cri.go:89] found id: ""
	I0425 20:05:15.592914   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.592926   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:15.592933   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:15.592992   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:15.632817   72712 cri.go:89] found id: ""
	I0425 20:05:15.632855   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.632868   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:15.632880   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:15.632900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:15.648443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:15.648470   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:15.726167   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:15.726191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:15.726229   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:15.803028   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:15.803066   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.850519   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:15.850552   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:14.873291   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:17.372118   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.917932   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.420096   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.384555   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.885930   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.404671   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:18.422600   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:18.422663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:18.476977   72712 cri.go:89] found id: ""
	I0425 20:05:18.477001   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.477009   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:18.477021   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:18.477093   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:18.525595   72712 cri.go:89] found id: ""
	I0425 20:05:18.525631   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.525641   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:18.525648   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:18.525714   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:18.565485   72712 cri.go:89] found id: ""
	I0425 20:05:18.565513   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.565523   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:18.565531   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:18.565600   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:18.612059   72712 cri.go:89] found id: ""
	I0425 20:05:18.612096   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.612106   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:18.612112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:18.612173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:18.659407   72712 cri.go:89] found id: ""
	I0425 20:05:18.659438   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.659449   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:18.659456   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:18.659507   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:18.701065   72712 cri.go:89] found id: ""
	I0425 20:05:18.701092   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.701101   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:18.701106   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:18.701201   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:18.738234   72712 cri.go:89] found id: ""
	I0425 20:05:18.738264   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.738276   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:18.738284   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:18.738343   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:18.780460   72712 cri.go:89] found id: ""
	I0425 20:05:18.780489   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.780498   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:18.780514   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:18.780526   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:18.834345   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:18.834378   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:18.850006   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:18.850033   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:18.932146   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:18.932171   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:18.932185   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:19.015036   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:19.015068   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:21.568250   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:21.582519   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:21.582595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:21.622886   72712 cri.go:89] found id: ""
	I0425 20:05:21.622913   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.622920   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:21.622925   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:21.622974   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:21.664832   72712 cri.go:89] found id: ""
	I0425 20:05:21.664860   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.664874   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:21.664882   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:21.664950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:21.703801   72712 cri.go:89] found id: ""
	I0425 20:05:21.703829   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.703843   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:21.703850   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:21.703911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:21.741502   72712 cri.go:89] found id: ""
	I0425 20:05:21.741540   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.741549   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:21.741555   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:21.741612   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:21.783715   72712 cri.go:89] found id: ""
	I0425 20:05:21.783745   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.783754   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:21.783759   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:21.783803   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:21.822806   72712 cri.go:89] found id: ""
	I0425 20:05:21.822842   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.822851   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:21.822856   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:21.822915   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:21.864996   72712 cri.go:89] found id: ""
	I0425 20:05:21.865020   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.865030   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:21.865037   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:21.865092   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:21.907533   72712 cri.go:89] found id: ""
	I0425 20:05:21.907563   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.907575   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:21.907585   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:21.907601   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:21.964226   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:21.964260   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:21.980096   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:21.980123   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:22.059516   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:22.059539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:22.059566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:22.136752   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:22.136784   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:19.373290   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:21.873377   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.916720   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:22.917156   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.918191   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:23.384566   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:25.885793   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.682139   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:24.697495   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:24.697564   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:24.739725   72712 cri.go:89] found id: ""
	I0425 20:05:24.739750   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.739760   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:24.739766   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:24.739824   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:24.777455   72712 cri.go:89] found id: ""
	I0425 20:05:24.777485   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.777497   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:24.777504   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:24.777566   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:24.821729   72712 cri.go:89] found id: ""
	I0425 20:05:24.821761   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.821774   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:24.821782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:24.821845   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:24.861745   72712 cri.go:89] found id: ""
	I0425 20:05:24.861773   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.861784   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:24.861791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:24.861851   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:24.903441   72712 cri.go:89] found id: ""
	I0425 20:05:24.903470   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.903479   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:24.903486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:24.903544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:24.943589   72712 cri.go:89] found id: ""
	I0425 20:05:24.943618   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.943629   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:24.943637   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:24.943717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:24.983629   72712 cri.go:89] found id: ""
	I0425 20:05:24.983661   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.983672   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:24.983680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:24.983739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:25.022413   72712 cri.go:89] found id: ""
	I0425 20:05:25.022441   72712 logs.go:276] 0 containers: []
	W0425 20:05:25.022451   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:25.022462   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:25.022477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:25.077402   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:25.077438   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:25.094488   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:25.094517   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:25.171485   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:25.171515   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:25.171535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:25.251131   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:25.251166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:24.373762   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:26.873969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.420395   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:29.420994   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:28.384247   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:30.883795   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.797359   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:27.813601   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:27.813659   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:27.854017   72712 cri.go:89] found id: ""
	I0425 20:05:27.854051   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.854061   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:27.854066   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:27.854117   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:27.900425   72712 cri.go:89] found id: ""
	I0425 20:05:27.900451   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.900461   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:27.900468   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:27.900531   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:27.940064   72712 cri.go:89] found id: ""
	I0425 20:05:27.940096   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.940107   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:27.940114   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:27.940174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:27.979363   72712 cri.go:89] found id: ""
	I0425 20:05:27.979385   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.979393   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:27.979399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:27.979442   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:28.019702   72712 cri.go:89] found id: ""
	I0425 20:05:28.019723   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.019731   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:28.019736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:28.019798   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:28.058711   72712 cri.go:89] found id: ""
	I0425 20:05:28.058740   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.058748   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:28.058755   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:28.058810   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:28.104465   72712 cri.go:89] found id: ""
	I0425 20:05:28.104495   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.104507   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:28.104515   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:28.104577   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:28.142399   72712 cri.go:89] found id: ""
	I0425 20:05:28.142431   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.142440   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:28.142449   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:28.142460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:28.222763   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:28.222786   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:28.222801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:28.299797   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:28.299838   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:28.366569   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:28.366594   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:28.424581   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:28.424628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:30.942526   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:30.957400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:30.957482   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:30.996931   72712 cri.go:89] found id: ""
	I0425 20:05:30.996958   72712 logs.go:276] 0 containers: []
	W0425 20:05:30.996967   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:30.996974   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:30.997029   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:31.035673   72712 cri.go:89] found id: ""
	I0425 20:05:31.035700   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.035712   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:31.035719   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:31.035782   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:31.075783   72712 cri.go:89] found id: ""
	I0425 20:05:31.075809   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.075820   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:31.075826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:31.075886   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:31.114229   72712 cri.go:89] found id: ""
	I0425 20:05:31.114257   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.114267   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:31.114274   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:31.114333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:31.155385   72712 cri.go:89] found id: ""
	I0425 20:05:31.155409   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.155419   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:31.155427   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:31.155486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:31.193772   72712 cri.go:89] found id: ""
	I0425 20:05:31.193804   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.193815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:31.193823   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:31.193878   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:31.233886   72712 cri.go:89] found id: ""
	I0425 20:05:31.233909   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.233917   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:31.233923   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:31.233967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:31.273427   72712 cri.go:89] found id: ""
	I0425 20:05:31.273455   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.273465   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:31.273476   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:31.273491   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:31.354429   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:31.354462   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:31.406018   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:31.406047   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:31.460972   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:31.461007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:31.477485   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:31.477513   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:31.551616   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:29.371357   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.373007   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.421948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.424866   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.384577   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.884780   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:34.052808   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:34.068068   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:34.068158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:34.120984   72712 cri.go:89] found id: ""
	I0425 20:05:34.121016   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.121024   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:34.121032   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:34.121082   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:34.160646   72712 cri.go:89] found id: ""
	I0425 20:05:34.160676   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.160687   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:34.160694   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:34.160752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:34.202641   72712 cri.go:89] found id: ""
	I0425 20:05:34.202665   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.202671   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:34.202677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:34.202733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:34.244352   72712 cri.go:89] found id: ""
	I0425 20:05:34.244379   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.244391   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:34.244400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:34.244460   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:34.285858   72712 cri.go:89] found id: ""
	I0425 20:05:34.285885   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.285896   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:34.285904   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:34.285956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:34.323634   72712 cri.go:89] found id: ""
	I0425 20:05:34.323662   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.323673   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:34.323681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:34.323739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:34.365230   72712 cri.go:89] found id: ""
	I0425 20:05:34.365256   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.365272   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:34.365280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:34.365339   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:34.409329   72712 cri.go:89] found id: ""
	I0425 20:05:34.409354   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.409365   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:34.409376   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:34.409390   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:34.464575   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:34.464606   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:34.480244   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:34.480270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:34.560204   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:34.560224   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:34.560236   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:34.640152   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:34.640187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:37.189992   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:37.204683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:37.204786   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:37.245857   72712 cri.go:89] found id: ""
	I0425 20:05:37.245891   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.245903   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:37.245910   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:37.245969   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:37.284668   72712 cri.go:89] found id: ""
	I0425 20:05:37.284696   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.284704   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:37.284710   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:37.284762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:37.324349   72712 cri.go:89] found id: ""
	I0425 20:05:37.324379   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.324391   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:37.324399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:37.324461   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:33.872836   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.873214   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.373278   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.917308   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.419746   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.383933   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.385166   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:37.361764   72712 cri.go:89] found id: ""
	I0425 20:05:37.361787   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.361800   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:37.361811   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:37.361857   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:37.404331   72712 cri.go:89] found id: ""
	I0425 20:05:37.404353   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.404360   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:37.404366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:37.404430   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:37.445284   72712 cri.go:89] found id: ""
	I0425 20:05:37.445316   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.445327   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:37.445334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:37.445395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:37.483806   72712 cri.go:89] found id: ""
	I0425 20:05:37.483828   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.483837   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:37.483843   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:37.483888   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:37.524649   72712 cri.go:89] found id: ""
	I0425 20:05:37.524673   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.524680   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:37.524689   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:37.524701   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:37.581521   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:37.581553   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:37.598459   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:37.598487   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:37.671236   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:37.671256   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:37.671272   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:37.750517   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:37.750556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.293743   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:40.310344   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:40.310426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:40.356157   72712 cri.go:89] found id: ""
	I0425 20:05:40.356198   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.356208   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:40.356215   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:40.356277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:40.397857   72712 cri.go:89] found id: ""
	I0425 20:05:40.397886   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.397895   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:40.397902   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:40.397964   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:40.445034   72712 cri.go:89] found id: ""
	I0425 20:05:40.445057   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.445065   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:40.445071   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:40.445126   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:40.493744   72712 cri.go:89] found id: ""
	I0425 20:05:40.493773   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.493783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:40.493797   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:40.493856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:40.550546   72712 cri.go:89] found id: ""
	I0425 20:05:40.550572   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.550580   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:40.550587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:40.550654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:40.605122   72712 cri.go:89] found id: ""
	I0425 20:05:40.605153   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.605164   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:40.605172   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:40.605232   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:40.675713   72712 cri.go:89] found id: ""
	I0425 20:05:40.675745   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.675755   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:40.675769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:40.675828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:40.716064   72712 cri.go:89] found id: ""
	I0425 20:05:40.716093   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.716101   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:40.716109   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:40.716120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:40.781395   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:40.781441   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:40.797597   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:40.797628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:40.880931   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:40.880956   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:40.880971   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:40.970770   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:40.970800   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.373398   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.873163   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.918560   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.417610   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:45.420963   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.883556   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:44.883719   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.520389   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:43.537668   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:43.537729   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:43.578137   72712 cri.go:89] found id: ""
	I0425 20:05:43.578166   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.578175   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:43.578180   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:43.578247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:43.617428   72712 cri.go:89] found id: ""
	I0425 20:05:43.617454   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.617462   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:43.617466   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:43.617519   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:43.655401   72712 cri.go:89] found id: ""
	I0425 20:05:43.655431   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.655443   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:43.655450   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:43.655514   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:43.695183   72712 cri.go:89] found id: ""
	I0425 20:05:43.695212   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.695230   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:43.695238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:43.695316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:43.735056   72712 cri.go:89] found id: ""
	I0425 20:05:43.735086   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.735098   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:43.735104   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:43.735162   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:43.774761   72712 cri.go:89] found id: ""
	I0425 20:05:43.774789   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.774799   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:43.774830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:43.774889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:43.819102   72712 cri.go:89] found id: ""
	I0425 20:05:43.819128   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.819138   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:43.819146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:43.819206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:43.858235   72712 cri.go:89] found id: ""
	I0425 20:05:43.858267   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.858278   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:43.858289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:43.858303   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:43.940756   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:43.940794   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:43.985878   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:43.985925   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:44.040177   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:44.040207   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:44.055912   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:44.055942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:44.143724   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:46.643923   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:46.658863   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:46.658941   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:46.697826   72712 cri.go:89] found id: ""
	I0425 20:05:46.697850   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.697858   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:46.697884   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:46.697947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:46.739850   72712 cri.go:89] found id: ""
	I0425 20:05:46.739877   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.739888   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:46.739897   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:46.739955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:46.781212   72712 cri.go:89] found id: ""
	I0425 20:05:46.781241   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.781256   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:46.781262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:46.781321   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:46.826005   72712 cri.go:89] found id: ""
	I0425 20:05:46.826036   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.826047   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:46.826055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:46.826109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:46.865428   72712 cri.go:89] found id: ""
	I0425 20:05:46.865456   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.865465   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:46.865472   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:46.865522   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:46.914860   72712 cri.go:89] found id: ""
	I0425 20:05:46.914887   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.914897   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:46.914907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:46.914968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:46.955323   72712 cri.go:89] found id: ""
	I0425 20:05:46.955355   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.955365   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:46.955373   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:46.955436   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:46.999369   72712 cri.go:89] found id: ""
	I0425 20:05:46.999396   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.999408   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:46.999419   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:46.999464   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:47.013865   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:47.013893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:47.094725   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:47.094755   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:47.094771   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:47.178380   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:47.178426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:47.227217   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:47.227249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:45.375272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.872640   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.917579   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.918001   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:46.884746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:48.884818   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.780217   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:49.795690   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:49.795760   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:49.834909   72712 cri.go:89] found id: ""
	I0425 20:05:49.834935   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.834943   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:49.834951   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:49.835004   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:49.872717   72712 cri.go:89] found id: ""
	I0425 20:05:49.872747   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.872755   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:49.872762   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:49.872807   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:49.919348   72712 cri.go:89] found id: ""
	I0425 20:05:49.919376   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.919387   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:49.919395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:49.919465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:49.959673   72712 cri.go:89] found id: ""
	I0425 20:05:49.959705   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.959716   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:49.959728   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:49.959796   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:49.999276   72712 cri.go:89] found id: ""
	I0425 20:05:49.999299   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.999306   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:49.999312   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:49.999361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:50.037426   72712 cri.go:89] found id: ""
	I0425 20:05:50.037454   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.037461   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:50.037466   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:50.037510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:50.080666   72712 cri.go:89] found id: ""
	I0425 20:05:50.080695   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.080703   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:50.080719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:50.080776   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:50.126065   72712 cri.go:89] found id: ""
	I0425 20:05:50.126111   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.126123   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:50.126134   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:50.126148   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:50.140778   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:50.140805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:50.213282   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:50.213308   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:50.213320   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:50.293798   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:50.293832   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:50.336823   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:50.336859   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:49.873685   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.372830   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.919781   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:54.417518   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.382698   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:53.392894   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:55.884231   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.892579   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:52.909556   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:52.909629   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:52.948098   72712 cri.go:89] found id: ""
	I0425 20:05:52.948127   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.948138   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:52.948146   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:52.948206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:52.988813   72712 cri.go:89] found id: ""
	I0425 20:05:52.988840   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.988848   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:52.988853   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:52.988898   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:53.032181   72712 cri.go:89] found id: ""
	I0425 20:05:53.032211   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.032222   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:53.032230   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:53.032288   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:53.075496   72712 cri.go:89] found id: ""
	I0425 20:05:53.075528   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.075538   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:53.075543   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:53.075599   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:53.119037   72712 cri.go:89] found id: ""
	I0425 20:05:53.119070   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.119082   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:53.119095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:53.119158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:53.158276   72712 cri.go:89] found id: ""
	I0425 20:05:53.158303   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.158314   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:53.158321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:53.158381   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:53.196168   72712 cri.go:89] found id: ""
	I0425 20:05:53.196199   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.196211   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:53.196219   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:53.196277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:53.235212   72712 cri.go:89] found id: ""
	I0425 20:05:53.235235   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.235243   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:53.235250   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:53.235261   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:53.290435   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:53.290474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:53.306351   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:53.306380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:53.388623   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:53.388652   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:53.388666   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:53.480388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:53.480426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:56.027403   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:56.042683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:56.042755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:56.083672   72712 cri.go:89] found id: ""
	I0425 20:05:56.083706   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.083718   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:56.083725   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:56.083790   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:56.124071   72712 cri.go:89] found id: ""
	I0425 20:05:56.124105   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.124126   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:56.124134   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:56.124200   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:56.166692   72712 cri.go:89] found id: ""
	I0425 20:05:56.166724   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.166737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:56.166744   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:56.166808   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:56.203833   72712 cri.go:89] found id: ""
	I0425 20:05:56.203871   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.203884   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:56.203892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:56.203950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:56.242277   72712 cri.go:89] found id: ""
	I0425 20:05:56.242319   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.242341   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:56.242349   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:56.242416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:56.281697   72712 cri.go:89] found id: ""
	I0425 20:05:56.281726   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.281733   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:56.281739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:56.281812   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:56.322190   72712 cri.go:89] found id: ""
	I0425 20:05:56.322233   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.322243   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:56.322248   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:56.322310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:56.364831   72712 cri.go:89] found id: ""
	I0425 20:05:56.364853   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.364864   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:56.364875   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:56.364889   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:56.422824   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:56.422856   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:56.437619   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:56.437641   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:56.512938   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:56.512961   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:56.512977   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:56.598670   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:56.598708   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:54.872566   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.873184   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.917352   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.421645   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:58.383740   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:00.384113   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.150322   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:59.166883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:59.166956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:59.205086   72712 cri.go:89] found id: ""
	I0425 20:05:59.205112   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.205121   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:59.205126   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:59.205199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:59.253430   72712 cri.go:89] found id: ""
	I0425 20:05:59.253458   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.253469   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:59.253478   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:59.253539   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:59.293691   72712 cri.go:89] found id: ""
	I0425 20:05:59.293719   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.293731   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:59.293738   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:59.293801   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:59.331580   72712 cri.go:89] found id: ""
	I0425 20:05:59.331604   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.331613   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:59.331619   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:59.331663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:59.369985   72712 cri.go:89] found id: ""
	I0425 20:05:59.370012   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.370023   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:59.370031   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:59.370095   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:59.411636   72712 cri.go:89] found id: ""
	I0425 20:05:59.411662   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.411670   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:59.411676   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:59.411733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:59.454735   72712 cri.go:89] found id: ""
	I0425 20:05:59.454762   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.454774   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:59.454782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:59.454839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:59.497664   72712 cri.go:89] found id: ""
	I0425 20:05:59.497694   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.497704   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:59.497715   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:59.497731   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:59.556694   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:59.556728   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:59.572160   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:59.572187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:59.649040   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:59.649063   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:59.649083   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:59.727941   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:59.727975   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.275513   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:02.290486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:02.290557   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:02.332217   72712 cri.go:89] found id: ""
	I0425 20:06:02.332255   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.332273   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:02.332281   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:02.332357   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:58.873314   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.373601   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.916947   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.418479   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.384744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.885488   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.373346   72712 cri.go:89] found id: ""
	I0425 20:06:02.373370   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.373377   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:02.373382   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:02.373439   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:02.415835   72712 cri.go:89] found id: ""
	I0425 20:06:02.415861   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.415873   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:02.415881   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:02.415939   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:02.458876   72712 cri.go:89] found id: ""
	I0425 20:06:02.458905   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.458917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:02.458926   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:02.459008   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:02.502092   72712 cri.go:89] found id: ""
	I0425 20:06:02.502127   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.502138   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:02.502146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:02.502235   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:02.546357   72712 cri.go:89] found id: ""
	I0425 20:06:02.546383   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.546393   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:02.546399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:02.546459   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:02.586842   72712 cri.go:89] found id: ""
	I0425 20:06:02.586870   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.586881   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:02.586887   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:02.586932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:02.629305   72712 cri.go:89] found id: ""
	I0425 20:06:02.629339   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.629350   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:02.629360   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:02.629374   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.676583   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:02.676626   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:02.731790   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:02.731825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:02.747473   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:02.747499   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:02.824265   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:02.824289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:02.824304   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.408968   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:05.423645   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:05.423713   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:05.467402   72712 cri.go:89] found id: ""
	I0425 20:06:05.467425   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.467434   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:05.467445   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:05.467510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:05.503131   72712 cri.go:89] found id: ""
	I0425 20:06:05.503153   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.503161   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:05.503166   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:05.503216   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:05.545694   72712 cri.go:89] found id: ""
	I0425 20:06:05.545721   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.545732   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:05.545739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:05.545804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:05.585879   72712 cri.go:89] found id: ""
	I0425 20:06:05.585905   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.585912   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:05.585917   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:05.585963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:05.625520   72712 cri.go:89] found id: ""
	I0425 20:06:05.625549   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.625560   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:05.625567   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:05.625620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:05.664306   72712 cri.go:89] found id: ""
	I0425 20:06:05.664335   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.664345   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:05.664364   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:05.664437   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:05.705353   72712 cri.go:89] found id: ""
	I0425 20:06:05.705385   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.705397   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:05.705405   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:05.705468   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:05.743935   72712 cri.go:89] found id: ""
	I0425 20:06:05.743968   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.743977   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:05.743986   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:05.743997   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:05.801190   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:05.801234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:05.817046   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:05.817074   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:05.899413   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:05.899443   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:05.899458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.986303   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:05.986336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:03.872605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:05.876833   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.373392   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.916334   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.917480   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.887784   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:09.387085   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.531748   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:08.550667   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:08.550749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:08.594062   72712 cri.go:89] found id: ""
	I0425 20:06:08.594093   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.594102   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:08.594108   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:08.594163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:08.635823   72712 cri.go:89] found id: ""
	I0425 20:06:08.635861   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.635872   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:08.635880   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:08.635944   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:08.675338   72712 cri.go:89] found id: ""
	I0425 20:06:08.675383   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.675395   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:08.675402   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:08.675463   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:08.715971   72712 cri.go:89] found id: ""
	I0425 20:06:08.716001   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.716012   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:08.716019   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:08.716088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:08.758565   72712 cri.go:89] found id: ""
	I0425 20:06:08.758597   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.758608   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:08.758616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:08.758683   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:08.800179   72712 cri.go:89] found id: ""
	I0425 20:06:08.800207   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.800218   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:08.800226   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:08.800286   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:08.854603   72712 cri.go:89] found id: ""
	I0425 20:06:08.854639   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.854651   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:08.854659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:08.854724   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:08.904115   72712 cri.go:89] found id: ""
	I0425 20:06:08.904141   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.904152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:08.904162   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:08.904177   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:08.921826   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:08.921855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:09.003667   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:09.003687   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:09.003699   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:09.086301   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:09.086346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:09.138478   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:09.138516   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:11.704402   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:11.721810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:11.721902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:11.768790   72712 cri.go:89] found id: ""
	I0425 20:06:11.768829   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.768850   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:11.768858   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:11.768928   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:11.813543   72712 cri.go:89] found id: ""
	I0425 20:06:11.813576   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.813588   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:11.813595   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:11.813654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:11.853930   72712 cri.go:89] found id: ""
	I0425 20:06:11.853962   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.853972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:11.853980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:11.854044   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:11.900808   72712 cri.go:89] found id: ""
	I0425 20:06:11.900843   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.900853   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:11.900861   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:11.900919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:11.948850   72712 cri.go:89] found id: ""
	I0425 20:06:11.948876   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.948885   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:11.948890   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:11.948945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:11.989326   72712 cri.go:89] found id: ""
	I0425 20:06:11.989356   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.989365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:11.989371   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:11.989450   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:12.033912   72712 cri.go:89] found id: ""
	I0425 20:06:12.033943   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.033954   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:12.033959   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:12.034015   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:12.076170   72712 cri.go:89] found id: ""
	I0425 20:06:12.076199   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.076209   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:12.076217   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:12.076230   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:12.124851   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:12.124881   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:12.178927   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:12.178964   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:12.194925   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:12.194952   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:12.272163   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:12.272187   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:12.272202   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:10.374908   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.871613   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:10.917911   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.918144   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:15.419043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:11.886066   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.383880   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.851400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:14.869893   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:14.869967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:14.915793   72712 cri.go:89] found id: ""
	I0425 20:06:14.915820   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.915829   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:14.915836   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:14.915896   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:14.959549   72712 cri.go:89] found id: ""
	I0425 20:06:14.959576   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.959587   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:14.959606   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:14.959672   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:15.001420   72712 cri.go:89] found id: ""
	I0425 20:06:15.001453   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.001465   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:15.001474   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:15.001552   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:15.047960   72712 cri.go:89] found id: ""
	I0425 20:06:15.047988   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.047996   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:15.048001   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:15.048049   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:15.096688   72712 cri.go:89] found id: ""
	I0425 20:06:15.096722   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.096730   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:15.096736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:15.096795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:15.142673   72712 cri.go:89] found id: ""
	I0425 20:06:15.142701   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.142712   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:15.142719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:15.142784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:15.181729   72712 cri.go:89] found id: ""
	I0425 20:06:15.181757   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.181766   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:15.181773   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:15.181820   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:15.227858   72712 cri.go:89] found id: ""
	I0425 20:06:15.227886   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.227897   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:15.227905   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:15.227917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:15.283253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:15.283293   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:15.305572   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:15.305604   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:15.439587   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:15.439615   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:15.439631   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:15.525678   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:15.525714   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:14.872914   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.873605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:17.420065   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:19.917501   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.383915   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:18.883746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:20.884190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:18.078788   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:18.095012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:18.095083   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:18.136753   72712 cri.go:89] found id: ""
	I0425 20:06:18.136784   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.136796   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:18.136802   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:18.136850   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:18.184584   72712 cri.go:89] found id: ""
	I0425 20:06:18.184606   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.184614   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:18.184619   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:18.184691   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:18.228201   72712 cri.go:89] found id: ""
	I0425 20:06:18.228250   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.228263   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:18.228270   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:18.228326   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:18.267756   72712 cri.go:89] found id: ""
	I0425 20:06:18.267778   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.267786   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:18.267792   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:18.267855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:18.309727   72712 cri.go:89] found id: ""
	I0425 20:06:18.309755   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.309763   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:18.309769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:18.309827   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:18.350549   72712 cri.go:89] found id: ""
	I0425 20:06:18.350580   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.350592   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:18.350599   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:18.350656   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:18.393868   72712 cri.go:89] found id: ""
	I0425 20:06:18.393891   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.393902   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:18.393910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:18.393989   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:18.435163   72712 cri.go:89] found id: ""
	I0425 20:06:18.435195   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.435204   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:18.435211   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:18.435224   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:18.450871   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:18.450901   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:18.534501   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:18.534526   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:18.534538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:18.616979   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:18.617015   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:18.663568   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:18.663598   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.217744   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:21.235862   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:21.235955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:21.288966   72712 cri.go:89] found id: ""
	I0425 20:06:21.288996   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.289005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:21.289014   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:21.289075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:21.362068   72712 cri.go:89] found id: ""
	I0425 20:06:21.362092   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.362101   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:21.362108   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:21.362168   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:21.416870   72712 cri.go:89] found id: ""
	I0425 20:06:21.416894   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.416901   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:21.416907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:21.416956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:21.461465   72712 cri.go:89] found id: ""
	I0425 20:06:21.461495   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.461503   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:21.461508   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:21.461570   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:21.499985   72712 cri.go:89] found id: ""
	I0425 20:06:21.500014   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.500025   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:21.500032   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:21.500081   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:21.543725   72712 cri.go:89] found id: ""
	I0425 20:06:21.543764   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.543776   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:21.543784   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:21.543841   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:21.586535   72712 cri.go:89] found id: ""
	I0425 20:06:21.586566   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.586578   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:21.586587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:21.586644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:21.627885   72712 cri.go:89] found id: ""
	I0425 20:06:21.627912   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.627921   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:21.627929   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:21.627942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.685973   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:21.686006   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:21.702529   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:21.702556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:21.781634   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:21.781660   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:21.781673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:21.862986   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:21.863027   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:19.372142   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.374479   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.918699   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.419088   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:23.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:25.883438   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.413547   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:24.428247   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:24.428323   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:24.468708   72712 cri.go:89] found id: ""
	I0425 20:06:24.468757   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.468768   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:24.468775   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:24.468836   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:24.507667   72712 cri.go:89] found id: ""
	I0425 20:06:24.507694   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.507702   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:24.507708   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:24.507769   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:24.548537   72712 cri.go:89] found id: ""
	I0425 20:06:24.548562   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.548570   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:24.548576   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:24.548625   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:24.591240   72712 cri.go:89] found id: ""
	I0425 20:06:24.591264   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.591272   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:24.591280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:24.591325   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:24.631530   72712 cri.go:89] found id: ""
	I0425 20:06:24.631557   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.631568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:24.631575   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:24.631642   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:24.672878   72712 cri.go:89] found id: ""
	I0425 20:06:24.672903   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.672911   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:24.672916   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:24.672960   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:24.716168   72712 cri.go:89] found id: ""
	I0425 20:06:24.716193   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.716201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:24.716206   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:24.716256   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:24.758061   72712 cri.go:89] found id: ""
	I0425 20:06:24.758098   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.758110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:24.758122   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:24.758135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:24.839866   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:24.839900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:24.889288   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:24.889380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:24.946445   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:24.946488   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:24.963093   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:24.963126   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:25.044921   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:23.874297   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.372055   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.375436   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.916503   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.916669   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.887709   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.384645   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.545838   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:27.562659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:27.562717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:27.606462   72712 cri.go:89] found id: ""
	I0425 20:06:27.606491   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.606501   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:27.606509   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:27.606567   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:27.650475   72712 cri.go:89] found id: ""
	I0425 20:06:27.650505   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.650517   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:27.650524   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:27.650583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:27.695163   72712 cri.go:89] found id: ""
	I0425 20:06:27.695190   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.695201   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:27.695208   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:27.695265   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:27.741798   72712 cri.go:89] found id: ""
	I0425 20:06:27.741832   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.741842   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:27.741849   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:27.741904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:27.784146   72712 cri.go:89] found id: ""
	I0425 20:06:27.784175   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.784185   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:27.784193   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:27.784253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:27.827179   72712 cri.go:89] found id: ""
	I0425 20:06:27.827213   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.827225   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:27.827234   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:27.827298   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:27.872941   72712 cri.go:89] found id: ""
	I0425 20:06:27.872961   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.872980   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:27.872985   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:27.873040   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:27.917920   72712 cri.go:89] found id: ""
	I0425 20:06:27.917949   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.917959   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:27.917970   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:27.917985   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:27.971411   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:27.971455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:27.988704   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:27.988743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:28.064208   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:28.064229   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:28.064242   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:28.147388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:28.147427   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.694349   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:30.708595   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:30.708671   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:30.752963   72712 cri.go:89] found id: ""
	I0425 20:06:30.752994   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.753005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:30.753012   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:30.753073   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:30.795453   72712 cri.go:89] found id: ""
	I0425 20:06:30.795488   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.795498   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:30.795507   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:30.795574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:30.838945   72712 cri.go:89] found id: ""
	I0425 20:06:30.838970   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.838978   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:30.838984   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:30.839042   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:30.886128   72712 cri.go:89] found id: ""
	I0425 20:06:30.886160   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.886170   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:30.886178   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:30.886255   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:30.927773   72712 cri.go:89] found id: ""
	I0425 20:06:30.927805   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.927819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:30.927827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:30.927893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:30.968628   72712 cri.go:89] found id: ""
	I0425 20:06:30.968660   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.968672   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:30.968680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:30.968743   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:31.014590   72712 cri.go:89] found id: ""
	I0425 20:06:31.014616   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.014627   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:31.014634   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:31.014697   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:31.053236   72712 cri.go:89] found id: ""
	I0425 20:06:31.053262   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.053274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:31.053285   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:31.053301   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:31.107797   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:31.107834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:31.123675   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:31.123702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:31.201180   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:31.201204   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:31.201215   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:31.289474   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:31.289512   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.873981   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.373083   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.918572   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.420043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:35.421384   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:32.883164   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:34.883697   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.840828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:33.857736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:33.857795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:33.898621   72712 cri.go:89] found id: ""
	I0425 20:06:33.898647   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.898658   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:33.898665   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:33.898727   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:33.939211   72712 cri.go:89] found id: ""
	I0425 20:06:33.939234   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.939245   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:33.939250   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:33.939305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:33.981872   72712 cri.go:89] found id: ""
	I0425 20:06:33.981896   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.981903   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:33.981909   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:33.981965   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:34.027570   72712 cri.go:89] found id: ""
	I0425 20:06:34.027597   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.027609   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:34.027617   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:34.027675   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:34.072544   72712 cri.go:89] found id: ""
	I0425 20:06:34.072570   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.072586   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:34.072594   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:34.072674   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:34.119326   72712 cri.go:89] found id: ""
	I0425 20:06:34.119349   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.119358   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:34.119366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:34.119423   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:34.169618   72712 cri.go:89] found id: ""
	I0425 20:06:34.169642   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.169650   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:34.169655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:34.169705   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:34.213570   72712 cri.go:89] found id: ""
	I0425 20:06:34.213593   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.213601   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:34.213609   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:34.213621   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:34.255722   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:34.255756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:34.311113   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:34.311147   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:34.326869   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:34.326897   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:34.399765   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:34.399788   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:34.399801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:36.986610   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:37.003090   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:37.003163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:37.045929   72712 cri.go:89] found id: ""
	I0425 20:06:37.045956   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.045964   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:37.045969   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:37.046022   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:37.086835   72712 cri.go:89] found id: ""
	I0425 20:06:37.086868   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.086879   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:37.086885   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:37.086937   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:37.127454   72712 cri.go:89] found id: ""
	I0425 20:06:37.127479   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.127488   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:37.127494   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:37.127551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:37.168878   72712 cri.go:89] found id: ""
	I0425 20:06:37.168904   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.168917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:37.168924   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:37.168986   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:37.208859   72712 cri.go:89] found id: ""
	I0425 20:06:37.208889   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.208901   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:37.208914   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:37.208970   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:37.250407   72712 cri.go:89] found id: ""
	I0425 20:06:37.250439   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.250452   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:37.250467   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:37.250536   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:37.291004   72712 cri.go:89] found id: ""
	I0425 20:06:37.291040   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.291054   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:37.291063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:37.291125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:37.335573   72712 cri.go:89] found id: ""
	I0425 20:06:37.335597   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.335608   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:37.335619   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:37.335635   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:35.873065   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.371805   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.426152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:39.916340   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:36.884518   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.884859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.392773   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:37.392810   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:37.408311   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:37.408343   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:37.491376   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:37.491402   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:37.491416   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:37.574559   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:37.574600   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.125241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:40.142254   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:40.142347   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:40.186859   72712 cri.go:89] found id: ""
	I0425 20:06:40.186893   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.186904   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:40.186911   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:40.186972   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:40.229247   72712 cri.go:89] found id: ""
	I0425 20:06:40.229275   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.229288   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:40.229295   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:40.229361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:40.268853   72712 cri.go:89] found id: ""
	I0425 20:06:40.268879   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.268890   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:40.268897   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:40.268959   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:40.307621   72712 cri.go:89] found id: ""
	I0425 20:06:40.307650   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.307669   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:40.307677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:40.307732   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:40.351448   72712 cri.go:89] found id: ""
	I0425 20:06:40.351472   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.351484   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:40.351492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:40.351548   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:40.396771   72712 cri.go:89] found id: ""
	I0425 20:06:40.396804   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.396815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:40.396824   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:40.396890   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:40.443605   72712 cri.go:89] found id: ""
	I0425 20:06:40.443634   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.443642   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:40.443647   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:40.443694   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:40.495496   72712 cri.go:89] found id: ""
	I0425 20:06:40.495525   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.495536   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:40.495548   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:40.495563   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.539428   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:40.539457   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:40.596259   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:40.596305   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:40.613140   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:40.613167   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:40.701768   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:40.701793   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:40.701805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:40.372225   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:42.373541   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.916879   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.917783   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.386292   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.885441   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.294502   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:43.310041   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:43.310113   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:43.351841   72712 cri.go:89] found id: ""
	I0425 20:06:43.351864   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.351872   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:43.351877   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:43.351924   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:43.395467   72712 cri.go:89] found id: ""
	I0425 20:06:43.395497   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.395509   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:43.395516   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:43.395576   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:43.437256   72712 cri.go:89] found id: ""
	I0425 20:06:43.437354   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.437375   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:43.437384   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:43.437465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:43.480744   72712 cri.go:89] found id: ""
	I0425 20:06:43.480772   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.480783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:43.480791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:43.480839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:43.519916   72712 cri.go:89] found id: ""
	I0425 20:06:43.519951   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.519961   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:43.519975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:43.520039   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:43.557861   72712 cri.go:89] found id: ""
	I0425 20:06:43.557890   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.557901   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:43.557910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:43.557968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:43.594423   72712 cri.go:89] found id: ""
	I0425 20:06:43.594449   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.594458   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:43.594464   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:43.594512   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:43.632227   72712 cri.go:89] found id: ""
	I0425 20:06:43.632253   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.632262   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:43.632270   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:43.632281   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:43.688307   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:43.688336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:43.703382   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:43.703407   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:43.782073   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:43.782093   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:43.782109   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:43.872811   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:43.872842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:46.420420   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:46.435110   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:46.435174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:46.474019   72712 cri.go:89] found id: ""
	I0425 20:06:46.474044   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.474054   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:46.474067   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:46.474125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:46.517053   72712 cri.go:89] found id: ""
	I0425 20:06:46.517078   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.517088   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:46.517096   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:46.517150   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:46.560934   72712 cri.go:89] found id: ""
	I0425 20:06:46.560963   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.560972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:46.560977   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:46.561030   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:46.605969   72712 cri.go:89] found id: ""
	I0425 20:06:46.605997   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.606007   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:46.606012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:46.606061   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:46.647025   72712 cri.go:89] found id: ""
	I0425 20:06:46.647049   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.647058   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:46.647063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:46.647118   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:46.686931   72712 cri.go:89] found id: ""
	I0425 20:06:46.686956   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.686966   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:46.686975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:46.687053   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:46.727183   72712 cri.go:89] found id: ""
	I0425 20:06:46.727207   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.727216   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:46.727224   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:46.727277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:46.768030   72712 cri.go:89] found id: ""
	I0425 20:06:46.768059   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.768073   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:46.768085   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:46.768105   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:46.823400   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:46.823439   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:46.838443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:46.838468   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:46.919509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:46.919527   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:46.919538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:46.996250   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:46.996284   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:44.873706   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.874042   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:45.918619   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.418507   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.384559   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.884184   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.885081   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:49.542696   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:49.557346   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:49.557444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:49.595195   72712 cri.go:89] found id: ""
	I0425 20:06:49.595220   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.595231   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:49.595238   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:49.595305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:49.641324   72712 cri.go:89] found id: ""
	I0425 20:06:49.641354   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.641365   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:49.641373   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:49.641426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:49.681510   72712 cri.go:89] found id: ""
	I0425 20:06:49.681540   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.681552   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:49.681559   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:49.681620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:49.721482   72712 cri.go:89] found id: ""
	I0425 20:06:49.721509   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.721518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:49.721525   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:49.721581   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:49.762682   72712 cri.go:89] found id: ""
	I0425 20:06:49.762710   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.762723   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:49.762731   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:49.762793   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:49.801892   72712 cri.go:89] found id: ""
	I0425 20:06:49.801920   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.801932   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:49.801943   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:49.802002   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:49.840347   72712 cri.go:89] found id: ""
	I0425 20:06:49.840376   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.840387   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:49.840395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:49.840458   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:49.898486   72712 cri.go:89] found id: ""
	I0425 20:06:49.898516   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.898527   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:49.898536   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:49.898547   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:49.952735   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:49.952775   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:49.967986   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:49.968018   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:50.048003   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:50.048024   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:50.048040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:50.126062   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:50.126098   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:49.373031   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:51.873671   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.917641   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.418642   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.421542   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.384273   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.384393   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:52.679721   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:52.695636   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:52.695700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:52.738329   72712 cri.go:89] found id: ""
	I0425 20:06:52.738359   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.738368   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:52.738374   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:52.738420   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:52.779388   72712 cri.go:89] found id: ""
	I0425 20:06:52.779418   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.779426   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:52.779433   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:52.779496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:52.821105   72712 cri.go:89] found id: ""
	I0425 20:06:52.821137   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.821149   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:52.821168   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:52.821231   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:52.861781   72712 cri.go:89] found id: ""
	I0425 20:06:52.861817   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.861825   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:52.861831   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:52.861885   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:52.904602   72712 cri.go:89] found id: ""
	I0425 20:06:52.904633   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.904644   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:52.904651   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:52.904712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:52.951137   72712 cri.go:89] found id: ""
	I0425 20:06:52.951174   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.951183   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:52.951188   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:52.951234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:52.994199   72712 cri.go:89] found id: ""
	I0425 20:06:52.994249   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.994257   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:52.994262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:52.994315   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:53.031997   72712 cri.go:89] found id: ""
	I0425 20:06:53.032020   72712 logs.go:276] 0 containers: []
	W0425 20:06:53.032027   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:53.032035   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:53.032046   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:53.111351   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:53.111383   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:53.162470   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:53.162504   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:53.217188   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:53.217223   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:53.233071   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:53.233100   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:53.308983   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:55.809162   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:55.825185   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:55.825259   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:55.865963   72712 cri.go:89] found id: ""
	I0425 20:06:55.865989   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.866001   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:55.866009   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:55.866060   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:55.920565   72712 cri.go:89] found id: ""
	I0425 20:06:55.920601   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.920612   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:55.920620   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:55.920677   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:55.962643   72712 cri.go:89] found id: ""
	I0425 20:06:55.962669   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.962677   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:55.962684   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:55.962738   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:56.000737   72712 cri.go:89] found id: ""
	I0425 20:06:56.000764   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.000773   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:56.000782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:56.000828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:56.042226   72712 cri.go:89] found id: ""
	I0425 20:06:56.042251   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.042259   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:56.042265   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:56.042316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:56.080765   72712 cri.go:89] found id: ""
	I0425 20:06:56.080788   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.080798   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:56.080810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:56.080869   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:56.119563   72712 cri.go:89] found id: ""
	I0425 20:06:56.119590   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.119602   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:56.119608   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:56.119667   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:56.160136   72712 cri.go:89] found id: ""
	I0425 20:06:56.160162   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.160170   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:56.160179   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:56.160193   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:56.213506   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:56.213539   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:56.232121   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:56.232150   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:56.336606   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:56.336629   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:56.336640   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:56.426867   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:56.426908   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:54.374441   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:56.374847   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.916077   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.916521   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.384779   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.884281   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:58.975395   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:58.991064   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:58.991125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:59.031157   72712 cri.go:89] found id: ""
	I0425 20:06:59.031179   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.031190   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:59.031197   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:59.031253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:59.071893   72712 cri.go:89] found id: ""
	I0425 20:06:59.071923   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.071931   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:59.071937   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:59.071998   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:59.114714   72712 cri.go:89] found id: ""
	I0425 20:06:59.114749   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.114760   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:59.114768   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:59.114840   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:59.159482   72712 cri.go:89] found id: ""
	I0425 20:06:59.159510   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.159518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:59.159523   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:59.159575   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:59.201218   72712 cri.go:89] found id: ""
	I0425 20:06:59.201245   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.201253   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:59.201263   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:59.201312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:59.247277   72712 cri.go:89] found id: ""
	I0425 20:06:59.247305   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.247316   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:59.247324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:59.247379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:59.286713   72712 cri.go:89] found id: ""
	I0425 20:06:59.286738   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.286746   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:59.286751   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:59.286804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:59.332263   72712 cri.go:89] found id: ""
	I0425 20:06:59.332296   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.332320   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:59.332332   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:59.332346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:59.416446   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:59.416477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:59.462125   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:59.462166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:59.514881   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:59.514907   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:59.530109   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:59.530134   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:59.605820   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.106478   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:02.124859   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:02.124934   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:02.180491   72712 cri.go:89] found id: ""
	I0425 20:07:02.180526   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.180537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:02.180545   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:02.180601   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:02.237075   72712 cri.go:89] found id: ""
	I0425 20:07:02.237104   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.237118   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:02.237126   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:02.237190   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:02.295104   72712 cri.go:89] found id: ""
	I0425 20:07:02.295129   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.295140   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:02.295148   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:02.295210   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:02.335392   72712 cri.go:89] found id: ""
	I0425 20:07:02.335418   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.335428   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:02.335435   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:02.335496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:58.871748   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.372545   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.373424   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.917135   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.917504   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.885744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:04.385280   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:02.376964   72712 cri.go:89] found id: ""
	I0425 20:07:02.376990   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.377002   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:02.377009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:02.377066   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:02.415460   72712 cri.go:89] found id: ""
	I0425 20:07:02.415484   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.415491   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:02.415496   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:02.415550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:02.461946   72712 cri.go:89] found id: ""
	I0425 20:07:02.461972   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.461993   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:02.462009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:02.462075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:02.502829   72712 cri.go:89] found id: ""
	I0425 20:07:02.502851   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.502858   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:02.502866   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:02.502878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:02.558264   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:02.558296   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:02.574175   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:02.574225   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:02.649363   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.649389   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:02.649404   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:02.730528   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:02.730560   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.276648   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:05.292055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:05.292121   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:05.332849   72712 cri.go:89] found id: ""
	I0425 20:07:05.332874   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.332884   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:05.332892   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:05.332954   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:05.376446   72712 cri.go:89] found id: ""
	I0425 20:07:05.376475   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.376487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:05.376494   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:05.376556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:05.418635   72712 cri.go:89] found id: ""
	I0425 20:07:05.418664   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.418675   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:05.418682   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:05.418745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:05.459082   72712 cri.go:89] found id: ""
	I0425 20:07:05.459113   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.459123   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:05.459128   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:05.459175   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:05.498473   72712 cri.go:89] found id: ""
	I0425 20:07:05.498502   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.498514   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:05.498521   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:05.498578   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:05.543121   72712 cri.go:89] found id: ""
	I0425 20:07:05.543150   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.543159   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:05.543164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:05.543211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:05.585722   72712 cri.go:89] found id: ""
	I0425 20:07:05.585748   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.585758   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:05.585766   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:05.585826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:05.629614   72712 cri.go:89] found id: ""
	I0425 20:07:05.629647   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.629661   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:05.629671   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:05.629685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:05.683974   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:05.684007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:05.700651   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:05.700685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:05.782097   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:05.782127   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:05.782142   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:05.863881   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:05.863918   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.374553   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:07.872114   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.417080   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.417436   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:10.418259   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.885509   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:09.383078   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.412898   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:08.428152   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:08.428206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:08.468403   72712 cri.go:89] found id: ""
	I0425 20:07:08.468441   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.468455   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:08.468464   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:08.468529   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:08.511246   72712 cri.go:89] found id: ""
	I0425 20:07:08.511285   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.511297   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:08.511304   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:08.511363   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:08.553121   72712 cri.go:89] found id: ""
	I0425 20:07:08.553148   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.553155   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:08.553161   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:08.553214   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:08.589723   72712 cri.go:89] found id: ""
	I0425 20:07:08.589745   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.589755   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:08.589762   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:08.589826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:08.629502   72712 cri.go:89] found id: ""
	I0425 20:07:08.629525   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.629533   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:08.629538   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:08.629591   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:08.677107   72712 cri.go:89] found id: ""
	I0425 20:07:08.677144   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.677153   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:08.677164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:08.677212   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:08.716501   72712 cri.go:89] found id: ""
	I0425 20:07:08.716531   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.716542   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:08.716550   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:08.716611   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:08.763473   72712 cri.go:89] found id: ""
	I0425 20:07:08.763503   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.763515   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:08.763526   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:08.763543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:08.848961   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:08.848985   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:08.849000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:08.945851   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:08.945890   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:08.989429   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:08.989460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:09.042721   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:09.042756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.559400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:11.575100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:11.575180   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:11.613246   72712 cri.go:89] found id: ""
	I0425 20:07:11.613271   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.613284   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:11.613290   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:11.613351   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:11.655158   72712 cri.go:89] found id: ""
	I0425 20:07:11.655189   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.655200   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:11.655208   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:11.655266   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:11.695122   72712 cri.go:89] found id: ""
	I0425 20:07:11.695144   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.695151   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:11.695156   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:11.695205   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:11.735578   72712 cri.go:89] found id: ""
	I0425 20:07:11.735604   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.735615   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:11.735621   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:11.735680   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:11.774750   72712 cri.go:89] found id: ""
	I0425 20:07:11.774785   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.774795   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:11.774803   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:11.774855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:11.814878   72712 cri.go:89] found id: ""
	I0425 20:07:11.814908   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.814920   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:11.814939   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:11.815000   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:11.853262   72712 cri.go:89] found id: ""
	I0425 20:07:11.853295   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.853306   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:11.853313   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:11.853379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:11.897291   72712 cri.go:89] found id: ""
	I0425 20:07:11.897314   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.897324   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:11.897333   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:11.897348   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:11.956913   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:11.956945   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.973787   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:11.973821   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:12.055801   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:12.055826   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:12.055842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:12.140238   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:12.140270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:10.372634   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.374037   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.418299   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.919967   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:11.383994   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:13.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:15.884319   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.685296   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:14.699655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:14.699740   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:14.741907   72712 cri.go:89] found id: ""
	I0425 20:07:14.741936   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.741947   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:14.741955   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:14.742017   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:14.786457   72712 cri.go:89] found id: ""
	I0425 20:07:14.786479   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.786487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:14.786493   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:14.786537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:14.825010   72712 cri.go:89] found id: ""
	I0425 20:07:14.825042   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.825055   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:14.825063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:14.825124   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:14.874834   72712 cri.go:89] found id: ""
	I0425 20:07:14.874856   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.874867   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:14.874875   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:14.874933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:14.914636   72712 cri.go:89] found id: ""
	I0425 20:07:14.914674   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.914685   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:14.914693   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:14.914752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:14.959327   72712 cri.go:89] found id: ""
	I0425 20:07:14.959356   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.959365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:14.959372   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:14.959425   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:15.000637   72712 cri.go:89] found id: ""
	I0425 20:07:15.000666   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.000674   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:15.000680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:15.000728   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:15.040497   72712 cri.go:89] found id: ""
	I0425 20:07:15.040523   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.040531   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:15.040539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:15.040550   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:15.120206   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:15.120240   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:15.168292   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:15.168324   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:15.222133   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:15.222164   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:15.237719   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:15.237746   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:15.323404   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:14.872743   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.375231   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.420149   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:19.420277   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:18.384902   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:20.883469   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.823552   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:17.838837   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:17.838911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:17.880547   72712 cri.go:89] found id: ""
	I0425 20:07:17.880584   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.880595   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:17.880608   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:17.880669   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:17.929700   72712 cri.go:89] found id: ""
	I0425 20:07:17.929730   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.929742   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:17.929797   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:17.929861   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:17.974057   72712 cri.go:89] found id: ""
	I0425 20:07:17.974081   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.974088   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:17.974094   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:17.974142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:18.013173   72712 cri.go:89] found id: ""
	I0425 20:07:18.013200   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.013209   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:18.013215   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:18.013267   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:18.053525   72712 cri.go:89] found id: ""
	I0425 20:07:18.053557   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.053568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:18.053580   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:18.053644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:18.095972   72712 cri.go:89] found id: ""
	I0425 20:07:18.096004   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.096016   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:18.096024   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:18.096089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:18.136792   72712 cri.go:89] found id: ""
	I0425 20:07:18.136823   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.136834   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:18.136842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:18.136904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:18.176562   72712 cri.go:89] found id: ""
	I0425 20:07:18.176594   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.176605   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:18.176619   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:18.176634   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:18.254402   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:18.254440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:18.298075   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:18.298112   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:18.356091   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:18.356124   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:18.373788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:18.373822   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:18.452545   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:20.952752   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:20.972054   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:20.972133   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:21.015572   72712 cri.go:89] found id: ""
	I0425 20:07:21.015602   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.015613   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:21.015621   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:21.015689   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:21.053313   72712 cri.go:89] found id: ""
	I0425 20:07:21.053342   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.053352   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:21.053359   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:21.053422   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:21.090343   72712 cri.go:89] found id: ""
	I0425 20:07:21.090373   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.090384   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:21.090391   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:21.090472   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:21.127148   72712 cri.go:89] found id: ""
	I0425 20:07:21.127174   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.127184   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:21.127192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:21.127258   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:21.167175   72712 cri.go:89] found id: ""
	I0425 20:07:21.167199   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.167207   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:21.167212   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:21.167263   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:21.212740   72712 cri.go:89] found id: ""
	I0425 20:07:21.212771   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.212783   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:21.212791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:21.212856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:21.250751   72712 cri.go:89] found id: ""
	I0425 20:07:21.250774   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.250782   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:21.250788   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:21.250833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:21.292387   72712 cri.go:89] found id: ""
	I0425 20:07:21.292414   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.292426   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:21.292436   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:21.292451   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:21.337695   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:21.337726   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:21.395479   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:21.395520   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:21.411538   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:21.411564   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:21.493248   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:21.493270   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:21.493282   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:19.873680   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.372461   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:21.421770   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:23.426808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.883520   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.884554   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.076755   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:24.093549   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:24.093624   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:24.135660   72712 cri.go:89] found id: ""
	I0425 20:07:24.135686   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.135694   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:24.135705   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:24.135784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:24.179778   72712 cri.go:89] found id: ""
	I0425 20:07:24.179799   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.179807   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:24.179824   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:24.179883   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.226745   72712 cri.go:89] found id: ""
	I0425 20:07:24.226771   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.226780   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:24.226785   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:24.226839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:24.273302   72712 cri.go:89] found id: ""
	I0425 20:07:24.273327   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.273347   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:24.273354   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:24.273421   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:24.314117   72712 cri.go:89] found id: ""
	I0425 20:07:24.314149   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.314160   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:24.314167   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:24.314247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:24.353144   72712 cri.go:89] found id: ""
	I0425 20:07:24.353173   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.353184   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:24.353192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:24.353292   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:24.395899   72712 cri.go:89] found id: ""
	I0425 20:07:24.395925   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.395933   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:24.395938   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:24.395988   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:24.444470   72712 cri.go:89] found id: ""
	I0425 20:07:24.444503   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.444514   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:24.444525   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:24.444540   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:24.499845   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:24.499876   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:24.517421   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:24.517449   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:24.596509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:24.596530   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:24.596543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:24.710844   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:24.710878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.259541   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:27.275551   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:27.275609   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:27.314610   72712 cri.go:89] found id: ""
	I0425 20:07:27.314640   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.314651   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:27.314656   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:27.314712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:27.350100   72712 cri.go:89] found id: ""
	I0425 20:07:27.350132   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.350151   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:27.350158   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:27.350226   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.373886   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:26.873863   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:25.917794   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:28.417757   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:30.419922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.384565   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:29.385043   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.390197   72712 cri.go:89] found id: ""
	I0425 20:07:27.390238   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.390249   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:27.390257   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:27.390312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:27.431936   72712 cri.go:89] found id: ""
	I0425 20:07:27.431961   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.431973   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:27.431980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:27.432038   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:27.469175   72712 cri.go:89] found id: ""
	I0425 20:07:27.469204   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.469212   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:27.469218   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:27.469276   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:27.509385   72712 cri.go:89] found id: ""
	I0425 20:07:27.509416   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.509428   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:27.509436   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:27.509503   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:27.548997   72712 cri.go:89] found id: ""
	I0425 20:07:27.549034   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.549045   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:27.549052   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:27.549111   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:27.588925   72712 cri.go:89] found id: ""
	I0425 20:07:27.588959   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.588973   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:27.588985   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:27.589000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.635005   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:27.635040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:27.686587   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:27.686617   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:27.702913   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:27.702942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:27.775525   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:27.775551   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:27.775562   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.352358   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:30.367016   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:30.367088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:30.410878   72712 cri.go:89] found id: ""
	I0425 20:07:30.410906   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.410917   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:30.410927   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:30.410985   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:30.456150   72712 cri.go:89] found id: ""
	I0425 20:07:30.456173   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.456181   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:30.456186   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:30.456234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:30.495409   72712 cri.go:89] found id: ""
	I0425 20:07:30.495439   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.495450   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:30.495458   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:30.495516   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:30.535863   72712 cri.go:89] found id: ""
	I0425 20:07:30.535895   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.535906   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:30.535912   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:30.535971   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:30.573772   72712 cri.go:89] found id: ""
	I0425 20:07:30.573808   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.573819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:30.573826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:30.573892   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:30.626310   72712 cri.go:89] found id: ""
	I0425 20:07:30.626350   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.626362   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:30.626376   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:30.626438   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:30.666302   72712 cri.go:89] found id: ""
	I0425 20:07:30.666332   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.666343   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:30.666350   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:30.666413   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:30.703478   72712 cri.go:89] found id: ""
	I0425 20:07:30.703507   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.703519   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:30.703529   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:30.703543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:30.756532   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:30.756566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:30.772128   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:30.772158   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:30.853701   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:30.853728   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:30.853743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.935879   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:30.935917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:29.372219   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.872125   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:32.865998   72220 pod_ready.go:81] duration metric: took 4m0.000690329s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:32.866038   72220 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0425 20:07:32.866057   72220 pod_ready.go:38] duration metric: took 4m13.047288103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:32.866091   72220 kubeadm.go:591] duration metric: took 4m22.882679222s to restartPrimaryControlPlane
	W0425 20:07:32.866150   72220 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:32.866182   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:32.917319   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:35.421922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.886418   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.894776   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.483702   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:33.498238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:33.498310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:33.545696   72712 cri.go:89] found id: ""
	I0425 20:07:33.545723   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.545731   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:33.545737   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:33.545791   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:33.590808   72712 cri.go:89] found id: ""
	I0425 20:07:33.590837   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.590849   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:33.590857   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:33.590919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:33.634529   72712 cri.go:89] found id: ""
	I0425 20:07:33.634554   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.634562   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:33.634572   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:33.634640   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:33.679055   72712 cri.go:89] found id: ""
	I0425 20:07:33.679082   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.679093   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:33.679100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:33.679160   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:33.720653   72712 cri.go:89] found id: ""
	I0425 20:07:33.720686   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.720698   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:33.720706   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:33.720777   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:33.766163   72712 cri.go:89] found id: ""
	I0425 20:07:33.766221   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.766233   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:33.766241   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:33.766314   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:33.810804   72712 cri.go:89] found id: ""
	I0425 20:07:33.810830   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.810839   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:33.810844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:33.810908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:33.858109   72712 cri.go:89] found id: ""
	I0425 20:07:33.858140   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.858152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:33.858162   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:33.858176   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:33.926296   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:33.926333   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:33.944220   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:33.944249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:34.042119   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:34.042191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:34.042234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:34.143694   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:34.143732   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:36.691575   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:36.710408   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:36.710490   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:36.760097   72712 cri.go:89] found id: ""
	I0425 20:07:36.760135   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.760144   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:36.760150   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:36.760208   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:36.801508   72712 cri.go:89] found id: ""
	I0425 20:07:36.801532   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.801541   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:36.801546   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:36.801602   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:36.842293   72712 cri.go:89] found id: ""
	I0425 20:07:36.842328   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.842340   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:36.842355   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:36.842418   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:36.884101   72712 cri.go:89] found id: ""
	I0425 20:07:36.884131   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.884141   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:36.884149   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:36.884211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:36.925007   72712 cri.go:89] found id: ""
	I0425 20:07:36.925032   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.925039   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:36.925045   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:36.925109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:36.964975   72712 cri.go:89] found id: ""
	I0425 20:07:36.965009   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.965020   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:36.965028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:36.965088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:37.030956   72712 cri.go:89] found id: ""
	I0425 20:07:37.030987   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.030999   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:37.031007   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:37.031080   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:37.105919   72712 cri.go:89] found id: ""
	I0425 20:07:37.105946   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.105956   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:37.105967   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:37.105983   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:37.196376   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:37.196415   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:37.240296   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:37.240334   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:37.304336   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:37.304371   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:37.323146   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:37.323184   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:37.918245   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.418671   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:36.384384   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:38.387656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.883973   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	W0425 20:07:37.414563   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:39.915087   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:39.930987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:39.931068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:39.967641   72712 cri.go:89] found id: ""
	I0425 20:07:39.967682   72712 logs.go:276] 0 containers: []
	W0425 20:07:39.967693   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:39.967698   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:39.967755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:40.009924   72712 cri.go:89] found id: ""
	I0425 20:07:40.009951   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.009959   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:40.009969   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:40.010019   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:40.049644   72712 cri.go:89] found id: ""
	I0425 20:07:40.049675   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.049689   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:40.049697   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:40.049759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:40.090487   72712 cri.go:89] found id: ""
	I0425 20:07:40.090509   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.090519   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:40.090524   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:40.090583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:40.137634   72712 cri.go:89] found id: ""
	I0425 20:07:40.137664   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.137674   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:40.137681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:40.137745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:40.174832   72712 cri.go:89] found id: ""
	I0425 20:07:40.174863   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.174874   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:40.174882   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:40.174947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:40.212559   72712 cri.go:89] found id: ""
	I0425 20:07:40.212585   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.212593   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:40.212598   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:40.212687   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:40.253459   72712 cri.go:89] found id: ""
	I0425 20:07:40.253494   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.253506   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:40.253518   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:40.253533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:40.311253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:40.311288   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:40.326693   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:40.326722   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:40.405792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:40.405816   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:40.405831   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:40.486712   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:40.486749   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:42.419025   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:44.916387   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:41.387375   72304 pod_ready.go:81] duration metric: took 4m0.010411263s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:41.387396   72304 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:07:41.387402   72304 pod_ready.go:38] duration metric: took 4m6.083068398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:41.387414   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:07:41.387441   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:41.387498   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:41.459873   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:41.459899   72304 cri.go:89] found id: ""
	I0425 20:07:41.459907   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:41.459960   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.465470   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:41.465534   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:41.509504   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:41.509523   72304 cri.go:89] found id: ""
	I0425 20:07:41.509530   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:41.509584   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.515012   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:41.515070   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:41.562701   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:41.562727   72304 cri.go:89] found id: ""
	I0425 20:07:41.562737   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:41.562792   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.567856   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:41.567928   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:41.618411   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:41.618441   72304 cri.go:89] found id: ""
	I0425 20:07:41.618452   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:41.618510   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.625757   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:41.625826   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:41.672707   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:41.672734   72304 cri.go:89] found id: ""
	I0425 20:07:41.672741   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:41.672785   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.678040   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:41.678119   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:41.725172   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:41.725196   72304 cri.go:89] found id: ""
	I0425 20:07:41.725205   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:41.725264   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.730651   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:41.730718   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:41.777224   72304 cri.go:89] found id: ""
	I0425 20:07:41.777269   72304 logs.go:276] 0 containers: []
	W0425 20:07:41.777280   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:41.777290   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:41.777380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:41.821498   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:41.821524   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:41.821531   72304 cri.go:89] found id: ""
	I0425 20:07:41.821541   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:41.821599   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.827065   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.831900   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:41.831924   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:41.893198   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:41.893233   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:41.909141   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:41.909169   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:42.051260   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:42.051305   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:42.109173   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:42.109214   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:42.155862   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:42.155894   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:42.222430   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:42.222466   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:42.265323   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:42.265353   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:42.316534   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:42.316569   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:42.363543   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:42.363568   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:42.422389   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:42.422421   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:42.471230   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:42.471259   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.011223   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.011263   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:45.578411   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:45.597748   72304 api_server.go:72] duration metric: took 4m16.066757074s to wait for apiserver process to appear ...
	I0425 20:07:45.597777   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:07:45.597813   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:45.597861   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:45.649452   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:45.649481   72304 cri.go:89] found id: ""
	I0425 20:07:45.649491   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:45.649534   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.654965   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:45.655023   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:45.701151   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:45.701177   72304 cri.go:89] found id: ""
	I0425 20:07:45.701186   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:45.701238   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.706702   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:45.706767   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:45.763142   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:45.763167   72304 cri.go:89] found id: ""
	I0425 20:07:45.763177   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:45.763220   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.768626   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:45.768684   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:45.816615   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:45.816648   72304 cri.go:89] found id: ""
	I0425 20:07:45.816656   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:45.816701   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.822714   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:45.822790   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:45.875652   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:45.875678   72304 cri.go:89] found id: ""
	I0425 20:07:45.875688   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:45.875737   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.881649   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:45.881719   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:45.930631   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:45.930656   72304 cri.go:89] found id: ""
	I0425 20:07:45.930666   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:45.930721   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.939712   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:45.939783   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:45.984646   72304 cri.go:89] found id: ""
	I0425 20:07:45.984684   72304 logs.go:276] 0 containers: []
	W0425 20:07:45.984693   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:45.984699   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:45.984754   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:46.029752   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.029777   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.029782   72304 cri.go:89] found id: ""
	I0425 20:07:46.029789   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:46.029845   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.035189   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.040479   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:46.040503   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:46.101469   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:46.101509   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:46.167362   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:46.167401   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:46.217732   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:46.217759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:46.264372   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:46.264404   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:43.037730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:43.064471   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:43.064550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:43.130075   72712 cri.go:89] found id: ""
	I0425 20:07:43.130111   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.130129   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:43.130136   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:43.130195   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:43.169628   72712 cri.go:89] found id: ""
	I0425 20:07:43.169663   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.169675   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:43.169682   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:43.169748   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:43.214845   72712 cri.go:89] found id: ""
	I0425 20:07:43.214869   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.214877   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:43.214883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:43.214929   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:43.263047   72712 cri.go:89] found id: ""
	I0425 20:07:43.263069   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.263078   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:43.263083   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:43.263142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:43.313179   72712 cri.go:89] found id: ""
	I0425 20:07:43.313213   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.313223   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:43.313231   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:43.313295   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:43.353440   72712 cri.go:89] found id: ""
	I0425 20:07:43.353468   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.353480   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:43.353488   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:43.353546   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:43.392261   72712 cri.go:89] found id: ""
	I0425 20:07:43.392288   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.392296   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:43.392321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:43.392378   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:43.431111   72712 cri.go:89] found id: ""
	I0425 20:07:43.431139   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.431147   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:43.431155   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:43.431165   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:43.485087   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:43.485120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:43.501508   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:43.501536   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:43.586041   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:43.586073   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:43.586089   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.663194   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.663232   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:46.218461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:46.233195   72712 kubeadm.go:591] duration metric: took 4m4.06065248s to restartPrimaryControlPlane
	W0425 20:07:46.233281   72712 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:46.233311   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:48.166680   72712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.933342568s)
	I0425 20:07:48.166771   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:48.185391   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:07:48.198250   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:07:48.209825   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:07:48.209843   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:07:48.209897   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:07:48.220854   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:07:48.220909   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:07:48.231518   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:07:48.241515   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:07:48.241589   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:07:48.251764   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.261762   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:07:48.261813   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.271952   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:07:48.281914   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:07:48.281986   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
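	The grep/rm sequence above checks each leftover kubeconfig for the expected control-plane endpoint and deletes the ones that fail the check (here they are simply missing, so grep exits with status 2 and the files are removed before kubeadm regenerates them). The same pattern, collapsed into one loop as a sketch rather than minikube's actual code:

	# Keep a kubeconfig only if it already targets the expected control-plane endpoint.
	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"   # stale or missing: remove so kubeadm rewrites it
	  fi
	done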
	I0425 20:07:48.292879   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:07:48.372322   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:07:48.372460   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:07:48.529730   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:07:48.529854   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:07:48.529979   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:07:48.753171   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:07:48.755473   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:07:48.755590   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:07:48.755692   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:07:48.755809   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:07:48.755905   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:07:48.756132   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:07:48.756317   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:07:48.756867   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:07:48.757498   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:07:48.758073   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:07:48.758581   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:07:48.758745   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:07:48.758842   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:07:48.894873   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:07:48.946907   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:07:49.084938   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:07:49.201925   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:07:49.219675   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:07:49.220891   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:07:49.220951   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:07:49.387310   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:07:46.917886   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:48.919793   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:46.324627   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:46.324653   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:46.382068   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:46.382102   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.424672   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:46.424709   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.466659   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:46.466692   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:46.484868   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:46.484898   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:46.614688   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:46.614720   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:46.666805   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:46.666846   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:47.098854   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:47.098899   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:49.653042   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:07:49.657843   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:07:49.659251   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:07:49.659285   72304 api_server.go:131] duration metric: took 4.061499319s to wait for apiserver health ...
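	The healthz wait above amounts to polling the apiserver endpoint until it answers 200/ok. An equivalent manual check against the same endpoint shown in the log (illustrative only; -k skips TLS verification for brevity):

	# Poll the apiserver health endpoint until it reports "ok".
	until curl -ksf https://192.168.39.123:8444/healthz | grep -q ok; do
	  sleep 2
	done
	echo "apiserver healthy"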
	I0425 20:07:49.659295   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:07:49.659321   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:49.659380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:49.709699   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:49.709721   72304 cri.go:89] found id: ""
	I0425 20:07:49.709729   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:49.709795   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.715369   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:49.715429   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:49.773517   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:49.773544   72304 cri.go:89] found id: ""
	I0425 20:07:49.773554   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:49.773617   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.778984   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:49.779071   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:49.825707   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:49.825739   72304 cri.go:89] found id: ""
	I0425 20:07:49.825746   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:49.825790   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.830613   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:49.830678   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:49.872068   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:49.872094   72304 cri.go:89] found id: ""
	I0425 20:07:49.872104   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:49.872166   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.877311   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:49.877383   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:49.930182   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:49.930216   72304 cri.go:89] found id: ""
	I0425 20:07:49.930228   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:49.930283   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.935415   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:49.935484   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:49.985377   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:49.985404   72304 cri.go:89] found id: ""
	I0425 20:07:49.985412   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:49.985469   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.991021   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:49.991092   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:50.037755   72304 cri.go:89] found id: ""
	I0425 20:07:50.037787   72304 logs.go:276] 0 containers: []
	W0425 20:07:50.037802   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:50.037811   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:50.037875   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:50.083706   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.083731   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.083735   72304 cri.go:89] found id: ""
	I0425 20:07:50.083742   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:50.083793   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.088730   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.094339   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:50.094371   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:50.161538   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:50.161573   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.204178   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:50.204211   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.251315   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:50.251344   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:50.315859   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:50.315886   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:50.367787   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:50.367829   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:50.429509   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:50.429541   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:50.488723   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:50.488759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:50.506838   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:50.506879   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:50.629496   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:50.629526   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:50.689286   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:50.689321   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:50.731343   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:50.731373   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:50.772085   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:50.772114   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:49.389887   72712 out.go:204]   - Booting up control plane ...
	I0425 20:07:49.390011   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:07:49.395060   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:07:49.398108   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:07:49.398220   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:07:49.402596   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:07:53.651817   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:07:53.651845   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.651850   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.651854   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.651859   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.651862   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.651865   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.651872   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.651878   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.651885   72304 system_pods.go:74] duration metric: took 3.992584481s to wait for pod list to return data ...
	I0425 20:07:53.651892   72304 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:07:53.654617   72304 default_sa.go:45] found service account: "default"
	I0425 20:07:53.654641   72304 default_sa.go:55] duration metric: took 2.742232ms for default service account to be created ...
	I0425 20:07:53.654649   72304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:07:53.660082   72304 system_pods.go:86] 8 kube-system pods found
	I0425 20:07:53.660110   72304 system_pods.go:89] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.660116   72304 system_pods.go:89] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.660121   72304 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.660127   72304 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.660131   72304 system_pods.go:89] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.660135   72304 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.660142   72304 system_pods.go:89] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.660148   72304 system_pods.go:89] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.660154   72304 system_pods.go:126] duration metric: took 5.50043ms to wait for k8s-apps to be running ...
	I0425 20:07:53.660161   72304 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:07:53.660201   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:53.677461   72304 system_svc.go:56] duration metric: took 17.289854ms WaitForService to wait for kubelet
	I0425 20:07:53.677499   72304 kubeadm.go:576] duration metric: took 4m24.146512306s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:07:53.677524   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:07:53.681527   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:07:53.681562   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:07:53.681576   72304 node_conditions.go:105] duration metric: took 4.045221ms to run NodePressure ...
	I0425 20:07:53.681591   72304 start.go:240] waiting for startup goroutines ...
	I0425 20:07:53.681605   72304 start.go:245] waiting for cluster config update ...
	I0425 20:07:53.681622   72304 start.go:254] writing updated cluster config ...
	I0425 20:07:53.682002   72304 ssh_runner.go:195] Run: rm -f paused
	I0425 20:07:53.732056   72304 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:07:53.734302   72304 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142196" cluster and "default" namespace by default
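	With the profile reported as Done, the context named in the log can be exercised directly; for example (standard kubectl usage, not part of the test output):

	kubectl config use-context default-k8s-diff-port-142196
	kubectl get pods -n kube-system    # should list the pods enumerated above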
	I0425 20:07:51.419808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:53.916090   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:55.917139   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:58.417609   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:00.917152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:02.918628   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.419508   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.765908   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.899694836s)
	I0425 20:08:05.765989   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:05.787711   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:08:05.801717   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:08:05.813710   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:08:05.813741   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:08:05.813802   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:08:05.825122   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:08:05.825202   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:08:05.837118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:08:05.848807   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:08:05.848880   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:08:05.862028   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.873795   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:08:05.873919   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.885577   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:08:05.897605   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:08:05.897685   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:08:05.909284   72220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:08:05.965574   72220 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 20:08:05.965663   72220 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:08:06.133359   72220 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:08:06.133525   72220 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:08:06.133675   72220 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:08:06.391437   72220 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:08:06.393805   72220 out.go:204]   - Generating certificates and keys ...
	I0425 20:08:06.393905   72220 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:08:06.393994   72220 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:08:06.394121   72220 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:08:06.394237   72220 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:08:06.394332   72220 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:08:06.394417   72220 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:08:06.394514   72220 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:08:06.396093   72220 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:08:06.396202   72220 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:08:06.396300   72220 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:08:06.396358   72220 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:08:06.396423   72220 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:08:06.683452   72220 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:08:06.778456   72220 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 20:08:06.923709   72220 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:08:07.079685   72220 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:08:07.170533   72220 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:08:07.171070   72220 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:08:07.173798   72220 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:08:07.175699   72220 out.go:204]   - Booting up control plane ...
	I0425 20:08:07.175824   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:08:07.175924   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:08:07.176060   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:08:07.197685   72220 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:08:07.200579   72220 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:08:07.200645   72220 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:08:07.354665   72220 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 20:08:07.354779   72220 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 20:08:07.855900   72220 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.56346ms
	I0425 20:08:07.856015   72220 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 20:08:07.423114   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:09.425115   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:13.358654   72220 kubeadm.go:309] [api-check] The API server is healthy after 5.502458238s
	I0425 20:08:13.388381   72220 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 20:08:13.908867   72220 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 20:08:13.945417   72220 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 20:08:13.945708   72220 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-744552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 20:08:13.959901   72220 kubeadm.go:309] [bootstrap-token] Using token: r2mxoe.iuelddsr8gvoq1wo
	I0425 20:08:13.961409   72220 out.go:204]   - Configuring RBAC rules ...
	I0425 20:08:13.961552   72220 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 20:08:13.970435   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 20:08:13.978933   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 20:08:13.982503   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 20:08:13.987029   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 20:08:13.990969   72220 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 20:08:14.103051   72220 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 20:08:14.554715   72220 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 20:08:15.105951   72220 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 20:08:15.107134   72220 kubeadm.go:309] 
	I0425 20:08:15.107222   72220 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 20:08:15.107236   72220 kubeadm.go:309] 
	I0425 20:08:15.107336   72220 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 20:08:15.107349   72220 kubeadm.go:309] 
	I0425 20:08:15.107379   72220 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 20:08:15.107463   72220 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 20:08:15.107550   72220 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 20:08:15.107560   72220 kubeadm.go:309] 
	I0425 20:08:15.107657   72220 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 20:08:15.107668   72220 kubeadm.go:309] 
	I0425 20:08:15.107735   72220 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 20:08:15.107747   72220 kubeadm.go:309] 
	I0425 20:08:15.107807   72220 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 20:08:15.107935   72220 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 20:08:15.108030   72220 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 20:08:15.108042   72220 kubeadm.go:309] 
	I0425 20:08:15.108154   72220 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 20:08:15.108269   72220 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 20:08:15.108280   72220 kubeadm.go:309] 
	I0425 20:08:15.108395   72220 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.108556   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
	I0425 20:08:15.108594   72220 kubeadm.go:309] 	--control-plane 
	I0425 20:08:15.108603   72220 kubeadm.go:309] 
	I0425 20:08:15.108719   72220 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 20:08:15.108730   72220 kubeadm.go:309] 
	I0425 20:08:15.108849   72220 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.109004   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed 
	I0425 20:08:15.109717   72220 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:08:15.109778   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:08:15.109797   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:08:15.111712   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:08:11.918414   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:14.420753   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:15.113288   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:08:15.129693   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:08:15.157631   72220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:08:15.157709   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.157760   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-744552 minikube.k8s.io/updated_at=2024_04_25T20_08_15_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=no-preload-744552 minikube.k8s.io/primary=true
	I0425 20:08:15.374198   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.418592   72220 ops.go:34] apiserver oom_adj: -16
	I0425 20:08:15.874721   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.374969   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.875091   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.375038   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.874685   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:18.374802   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.917617   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:19.421721   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:18.874931   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.374961   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.874349   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.374787   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.875130   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.374959   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.874325   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.374798   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.875034   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:23.374899   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.917898   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:22.917132   71966 pod_ready.go:81] duration metric: took 4m0.007062693s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:08:22.917156   71966 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:08:22.917164   71966 pod_ready.go:38] duration metric: took 4m4.548150095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:22.917179   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:22.917211   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:22.917270   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:22.982604   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:22.982631   71966 cri.go:89] found id: ""
	I0425 20:08:22.982640   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:22.982698   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:22.988558   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:22.988618   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:23.031937   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.031964   71966 cri.go:89] found id: ""
	I0425 20:08:23.031973   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:23.032031   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.037315   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:23.037371   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:23.089839   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.089862   71966 cri.go:89] found id: ""
	I0425 20:08:23.089872   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:23.089936   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.095247   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:23.095309   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:23.136257   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.136286   71966 cri.go:89] found id: ""
	I0425 20:08:23.136294   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:23.136357   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.142548   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:23.142608   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:23.186190   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.186229   71966 cri.go:89] found id: ""
	I0425 20:08:23.186239   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:23.186301   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.191422   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:23.191494   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:23.242326   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.242361   71966 cri.go:89] found id: ""
	I0425 20:08:23.242371   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:23.242437   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.248578   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:23.248642   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:23.286781   71966 cri.go:89] found id: ""
	I0425 20:08:23.286807   71966 logs.go:276] 0 containers: []
	W0425 20:08:23.286817   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:23.286823   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:23.286885   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:23.334728   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:23.334754   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.334761   71966 cri.go:89] found id: ""
	I0425 20:08:23.334770   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:23.334831   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.340288   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.344787   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:23.344808   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:23.401830   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:23.401865   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:23.425683   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:23.425715   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:23.568527   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:23.568558   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.608747   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:23.608776   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.647962   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:23.647996   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.687270   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:23.687308   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:23.745081   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:23.745112   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:23.799375   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:23.799405   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.853199   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:23.853232   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.896535   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:23.896571   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.964317   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:23.964350   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:24.013196   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:24.013231   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:23.874275   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.374250   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.874396   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.374767   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.874968   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.374333   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.874916   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.374369   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.499044   72220 kubeadm.go:1107] duration metric: took 12.341393953s to wait for elevateKubeSystemPrivileges
	W0425 20:08:27.499078   72220 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 20:08:27.499087   72220 kubeadm.go:393] duration metric: took 5m17.572541498s to StartCluster
	I0425 20:08:27.499108   72220 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.499189   72220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:08:27.500940   72220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.501192   72220 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:08:27.503257   72220 out.go:177] * Verifying Kubernetes components...
	I0425 20:08:27.501308   72220 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:08:27.501405   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:08:27.505389   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:08:27.505403   72220 addons.go:69] Setting storage-provisioner=true in profile "no-preload-744552"
	I0425 20:08:27.505438   72220 addons.go:234] Setting addon storage-provisioner=true in "no-preload-744552"
	W0425 20:08:27.505453   72220 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:08:27.505490   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505505   72220 addons.go:69] Setting metrics-server=true in profile "no-preload-744552"
	I0425 20:08:27.505535   72220 addons.go:234] Setting addon metrics-server=true in "no-preload-744552"
	W0425 20:08:27.505546   72220 addons.go:243] addon metrics-server should already be in state true
	I0425 20:08:27.505574   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505895   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.505922   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.505492   72220 addons.go:69] Setting default-storageclass=true in profile "no-preload-744552"
	I0425 20:08:27.505990   72220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-744552"
	I0425 20:08:27.505952   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506099   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.506418   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506467   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.523666   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0425 20:08:27.526950   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0425 20:08:27.526972   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.526981   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I0425 20:08:27.527536   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527606   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527662   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.527683   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528039   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528059   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528122   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528228   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528242   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528601   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528644   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528712   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.528735   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.528800   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.529228   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.529246   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.532151   72220 addons.go:234] Setting addon default-storageclass=true in "no-preload-744552"
	W0425 20:08:27.532171   72220 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:08:27.532204   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.532543   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.532582   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.547165   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0425 20:08:27.547700   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.548354   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.548368   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.548675   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.548793   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.550640   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.554301   72220 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:08:27.553061   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0425 20:08:27.553099   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0425 20:08:27.555613   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:08:27.555630   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:08:27.555652   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.556177   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556181   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556724   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556739   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.556868   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556879   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.557128   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.557700   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.557729   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.558142   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.558406   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.559420   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.559990   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.560057   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.560076   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.560177   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.560333   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.560549   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.560967   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.562839   72220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:08:27.564442   72220 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.564480   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:08:27.564517   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.567912   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.568153   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.568171   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.570321   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.570514   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.570709   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.570945   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.578396   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
	I0425 20:08:27.586629   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.587070   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.587082   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.587584   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.587736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.589708   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.589937   72220 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.589948   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:08:27.589961   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.592640   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.592983   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.593007   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.593261   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.593541   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.593736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.593906   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.783858   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:08:27.820917   72220 node_ready.go:35] waiting up to 6m0s for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832349   72220 node_ready.go:49] node "no-preload-744552" has status "Ready":"True"
	I0425 20:08:27.832377   72220 node_ready.go:38] duration metric: took 11.423909ms for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832390   72220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:27.844475   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:27.886461   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:08:27.886483   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:08:27.899413   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.931511   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.935073   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:08:27.935098   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:08:27.989052   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:27.989082   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:08:28.016326   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:28.551863   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551894   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.551964   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551976   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552255   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552280   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552292   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552315   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552358   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.552397   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552405   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552414   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552421   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552571   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552597   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552710   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552736   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.578416   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.578445   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.578730   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.578776   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.578789   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.945831   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.945861   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946170   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946191   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946214   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.946224   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946531   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946549   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946560   72220 addons.go:470] Verifying addon metrics-server=true in "no-preload-744552"
	I0425 20:08:28.946570   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.948485   72220 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0425 20:08:27.005360   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:27.024856   71966 api_server.go:72] duration metric: took 4m14.401244231s to wait for apiserver process to appear ...
	I0425 20:08:27.024881   71966 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:08:27.024922   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:27.024982   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:27.072098   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:27.072129   71966 cri.go:89] found id: ""
	I0425 20:08:27.072140   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:27.072210   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.077726   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:27.077793   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:27.118834   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:27.118855   71966 cri.go:89] found id: ""
	I0425 20:08:27.118864   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:27.118917   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.125277   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:27.125347   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:27.167036   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.167064   71966 cri.go:89] found id: ""
	I0425 20:08:27.167074   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:27.167131   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.172390   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:27.172468   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:27.212933   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:27.212957   71966 cri.go:89] found id: ""
	I0425 20:08:27.212967   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:27.213022   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.218033   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:27.218083   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:27.259294   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:27.259321   71966 cri.go:89] found id: ""
	I0425 20:08:27.259331   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:27.259384   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.265537   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:27.265610   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:27.312145   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:27.312174   71966 cri.go:89] found id: ""
	I0425 20:08:27.312183   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:27.312240   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.318346   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:27.318405   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:27.362467   71966 cri.go:89] found id: ""
	I0425 20:08:27.362495   71966 logs.go:276] 0 containers: []
	W0425 20:08:27.362504   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:27.362509   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:27.362569   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:27.406810   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:27.406834   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.406839   71966 cri.go:89] found id: ""
	I0425 20:08:27.406846   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:27.406903   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.412431   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.421695   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:27.421725   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.472832   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:27.472863   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.535799   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:27.535830   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:28.004964   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:28.005006   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:28.072378   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:28.072417   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:28.236479   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:28.236523   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:28.296095   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:28.296133   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:28.351290   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:28.351314   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:28.400529   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:28.400567   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:28.459149   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:28.459178   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:28.507818   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:28.507844   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:28.565596   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:28.565627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:28.588509   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:28.588535   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:29.403321   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:08:29.403717   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:29.404001   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:28.950127   72220 addons.go:505] duration metric: took 1.448816058s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0425 20:08:29.862142   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:30.851653   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.851677   72220 pod_ready.go:81] duration metric: took 3.007171918s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.851689   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857090   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.857108   72220 pod_ready.go:81] duration metric: took 5.412841ms for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857117   72220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863315   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.863331   72220 pod_ready.go:81] duration metric: took 6.207835ms for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863339   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867557   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.867579   72220 pod_ready.go:81] duration metric: took 4.23311ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867590   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872391   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.872407   72220 pod_ready.go:81] duration metric: took 4.810397ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872415   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249226   72220 pod_ready.go:92] pod "kube-proxy-22w7x" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.249259   72220 pod_ready.go:81] duration metric: took 376.837327ms for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249284   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649908   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.649934   72220 pod_ready.go:81] duration metric: took 400.641991ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649945   72220 pod_ready.go:38] duration metric: took 3.817541056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:31.649962   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:31.650025   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:31.684094   72220 api_server.go:72] duration metric: took 4.182865357s to wait for apiserver process to appear ...
	I0425 20:08:31.684123   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:08:31.684146   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:08:31.689688   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:08:31.690939   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.690963   72220 api_server.go:131] duration metric: took 6.831773ms to wait for apiserver health ...
	I0425 20:08:31.690973   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.853816   72220 system_pods.go:59] 9 kube-system pods found
	I0425 20:08:31.853849   72220 system_pods.go:61] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:31.853856   72220 system_pods.go:61] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:31.853861   72220 system_pods.go:61] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:31.853868   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:31.853872   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:31.853877   72220 system_pods.go:61] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:31.853881   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:31.853889   72220 system_pods.go:61] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:31.853894   72220 system_pods.go:61] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:31.853907   72220 system_pods.go:74] duration metric: took 162.928561ms to wait for pod list to return data ...
	I0425 20:08:31.853916   72220 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:32.049906   72220 default_sa.go:45] found service account: "default"
	I0425 20:08:32.049932   72220 default_sa.go:55] duration metric: took 196.003422ms for default service account to be created ...
	I0425 20:08:32.049942   72220 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:32.255245   72220 system_pods.go:86] 9 kube-system pods found
	I0425 20:08:32.255290   72220 system_pods.go:89] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:32.255298   72220 system_pods.go:89] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:32.255304   72220 system_pods.go:89] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:32.255311   72220 system_pods.go:89] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:32.255317   72220 system_pods.go:89] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:32.255322   72220 system_pods.go:89] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:32.255328   72220 system_pods.go:89] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:32.255338   72220 system_pods.go:89] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:32.255348   72220 system_pods.go:89] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:32.255368   72220 system_pods.go:126] duration metric: took 205.41905ms to wait for k8s-apps to be running ...
	I0425 20:08:32.255378   72220 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:32.255429   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:32.274141   72220 system_svc.go:56] duration metric: took 18.75721ms WaitForService to wait for kubelet
	I0425 20:08:32.274173   72220 kubeadm.go:576] duration metric: took 4.77294686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:32.274198   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:32.449699   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:32.449727   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:32.449741   72220 node_conditions.go:105] duration metric: took 175.536406ms to run NodePressure ...
	I0425 20:08:32.449755   72220 start.go:240] waiting for startup goroutines ...
	I0425 20:08:32.449765   72220 start.go:245] waiting for cluster config update ...
	I0425 20:08:32.449778   72220 start.go:254] writing updated cluster config ...
	I0425 20:08:32.450108   72220 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:32.503317   72220 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:32.505391   72220 out.go:177] * Done! kubectl is now configured to use "no-preload-744552" cluster and "default" namespace by default
	I0425 20:08:31.153636   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:08:31.158526   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:08:31.159775   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.159817   71966 api_server.go:131] duration metric: took 4.134911832s to wait for apiserver health ...
	I0425 20:08:31.159827   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.159847   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:31.159890   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:31.201597   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:31.201616   71966 cri.go:89] found id: ""
	I0425 20:08:31.201625   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:31.201667   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.206973   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:31.207039   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:31.248400   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:31.248424   71966 cri.go:89] found id: ""
	I0425 20:08:31.248435   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:31.248496   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.253822   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:31.253879   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:31.298921   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:31.298946   71966 cri.go:89] found id: ""
	I0425 20:08:31.298956   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:31.299003   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.304691   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:31.304758   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:31.351773   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:31.351796   71966 cri.go:89] found id: ""
	I0425 20:08:31.351804   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:31.351851   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.356599   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:31.356651   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:31.399655   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:31.399678   71966 cri.go:89] found id: ""
	I0425 20:08:31.399686   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:31.399740   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.405103   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:31.405154   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:31.452763   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:31.452785   71966 cri.go:89] found id: ""
	I0425 20:08:31.452794   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:31.452840   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.457788   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:31.457838   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:31.503746   71966 cri.go:89] found id: ""
	I0425 20:08:31.503780   71966 logs.go:276] 0 containers: []
	W0425 20:08:31.503791   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:31.503798   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:31.503868   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:31.548517   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:31.548543   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:31.548555   71966 cri.go:89] found id: ""
	I0425 20:08:31.548565   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:31.548631   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.553673   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.558271   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:31.558290   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:31.974349   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:31.974387   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:32.033292   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:32.033327   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:32.050762   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:32.050791   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:32.101591   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:32.101627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:32.142626   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:32.142652   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:32.203270   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:32.203315   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:32.247021   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:32.247048   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:32.294900   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:32.294936   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:32.353902   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:32.353934   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:32.488543   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:32.488584   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:32.569303   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:32.569358   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:32.622767   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:32.622802   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:35.181779   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:08:35.181813   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.181820   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.181826   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.181832   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.181837   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.181843   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.181851   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.181858   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.181867   71966 system_pods.go:74] duration metric: took 4.022033823s to wait for pod list to return data ...
	I0425 20:08:35.181879   71966 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:35.185387   71966 default_sa.go:45] found service account: "default"
	I0425 20:08:35.185413   71966 default_sa.go:55] duration metric: took 3.523751ms for default service account to be created ...
	I0425 20:08:35.185423   71966 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:35.195075   71966 system_pods.go:86] 8 kube-system pods found
	I0425 20:08:35.195099   71966 system_pods.go:89] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.195104   71966 system_pods.go:89] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.195109   71966 system_pods.go:89] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.195114   71966 system_pods.go:89] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.195118   71966 system_pods.go:89] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.195122   71966 system_pods.go:89] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.195128   71966 system_pods.go:89] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.195133   71966 system_pods.go:89] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.195139   71966 system_pods.go:126] duration metric: took 9.711803ms to wait for k8s-apps to be running ...
	I0425 20:08:35.195155   71966 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:35.195195   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:35.213494   71966 system_svc.go:56] duration metric: took 18.331225ms WaitForService to wait for kubelet
	I0425 20:08:35.213523   71966 kubeadm.go:576] duration metric: took 4m22.589912913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:35.213545   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:35.216461   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:35.216481   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:35.216493   71966 node_conditions.go:105] duration metric: took 2.94061ms to run NodePressure ...
	I0425 20:08:35.216502   71966 start.go:240] waiting for startup goroutines ...
	I0425 20:08:35.216509   71966 start.go:245] waiting for cluster config update ...
	I0425 20:08:35.216518   71966 start.go:254] writing updated cluster config ...
	I0425 20:08:35.216750   71966 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:35.265836   71966 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:35.269026   71966 out.go:177] * Done! kubectl is now configured to use "embed-certs-512173" cluster and "default" namespace by default
	I0425 20:08:34.404410   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:34.404662   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:44.405293   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:44.405518   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:04.406406   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:04.406676   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.407969   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:44.408240   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.408259   72712 kubeadm.go:309] 
	I0425 20:09:44.408293   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:09:44.408355   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:09:44.408373   72712 kubeadm.go:309] 
	I0425 20:09:44.408417   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:09:44.408448   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:09:44.408562   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:09:44.408575   72712 kubeadm.go:309] 
	I0425 20:09:44.408655   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:09:44.408684   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:09:44.408711   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:09:44.408718   72712 kubeadm.go:309] 
	I0425 20:09:44.408812   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:09:44.408912   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:09:44.408939   72712 kubeadm.go:309] 
	I0425 20:09:44.409085   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:09:44.409217   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:09:44.409341   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:09:44.409418   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:09:44.409433   72712 kubeadm.go:309] 
	I0425 20:09:44.410319   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:09:44.410423   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:09:44.410510   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0425 20:09:44.410640   72712 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0425 20:09:44.410700   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:09:45.395830   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:09:45.412628   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:09:45.423387   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:09:45.423412   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:09:45.423465   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:09:45.434317   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:09:45.434389   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:09:45.445657   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:09:45.455698   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:09:45.455772   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:09:45.466137   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.476140   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:09:45.476192   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.486410   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:09:45.495465   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:09:45.495522   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:09:45.505410   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:09:45.726416   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:11:42.214574   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:11:42.214715   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0425 20:11:42.216323   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:11:42.216393   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:11:42.216507   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:11:42.216650   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:11:42.216795   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:11:42.216882   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:11:42.218766   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:11:42.218847   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:11:42.218923   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:11:42.219042   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:11:42.219103   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:11:42.219167   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:11:42.219237   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:11:42.219321   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:11:42.219407   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:11:42.219519   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:11:42.219639   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:11:42.219694   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:11:42.219742   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:11:42.219786   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:11:42.219831   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:11:42.219883   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:11:42.219929   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:11:42.220029   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:11:42.220139   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:11:42.220204   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:11:42.220308   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:11:42.222891   72712 out.go:204]   - Booting up control plane ...
	I0425 20:11:42.222979   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:11:42.223054   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:11:42.223129   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:11:42.223222   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:11:42.223404   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:11:42.223459   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:11:42.223565   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.223835   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.223937   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224165   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224243   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224457   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224541   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224799   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224902   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.225125   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.225134   72712 kubeadm.go:309] 
	I0425 20:11:42.225166   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:11:42.225204   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:11:42.225210   72712 kubeadm.go:309] 
	I0425 20:11:42.225239   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:11:42.225267   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:11:42.225352   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:11:42.225358   72712 kubeadm.go:309] 
	I0425 20:11:42.225446   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:11:42.225476   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:11:42.225522   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:11:42.225533   72712 kubeadm.go:309] 
	I0425 20:11:42.225626   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:11:42.225714   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:11:42.225729   72712 kubeadm.go:309] 
	I0425 20:11:42.225875   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:11:42.225951   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:11:42.226022   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:11:42.226096   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:11:42.226129   72712 kubeadm.go:309] 
	I0425 20:11:42.226162   72712 kubeadm.go:393] duration metric: took 8m0.122692927s to StartCluster
	I0425 20:11:42.226242   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:11:42.226299   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:11:42.283295   72712 cri.go:89] found id: ""
	I0425 20:11:42.283320   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.283329   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:11:42.283335   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:11:42.283389   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:11:42.322462   72712 cri.go:89] found id: ""
	I0425 20:11:42.322493   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.322505   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:11:42.322512   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:11:42.322574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:11:42.372329   72712 cri.go:89] found id: ""
	I0425 20:11:42.372355   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.372363   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:11:42.372369   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:11:42.372416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:11:42.420348   72712 cri.go:89] found id: ""
	I0425 20:11:42.420374   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.420382   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:11:42.420389   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:11:42.420447   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:11:42.460274   72712 cri.go:89] found id: ""
	I0425 20:11:42.460317   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.460329   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:11:42.460337   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:11:42.460395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:11:42.503828   72712 cri.go:89] found id: ""
	I0425 20:11:42.503855   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.503867   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:11:42.503874   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:11:42.503933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:11:42.545045   72712 cri.go:89] found id: ""
	I0425 20:11:42.545070   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.545086   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:11:42.545095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:11:42.545156   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:11:42.586389   72712 cri.go:89] found id: ""
	I0425 20:11:42.586413   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.586421   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:11:42.586429   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:11:42.586440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:11:42.602835   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:11:42.602863   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:11:42.695131   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:11:42.695153   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:11:42.695168   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:11:42.819889   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:11:42.819922   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:11:42.869446   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:11:42.869474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0425 20:11:42.927184   72712 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0425 20:11:42.927236   72712 out.go:239] * 
	W0425 20:11:42.927291   72712 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.927311   72712 out.go:239] * 
	W0425 20:11:42.928275   72712 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 20:11:42.931353   72712 out.go:177] 
	W0425 20:11:42.932654   72712 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.932696   72712 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0425 20:11:42.932713   72712 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0425 20:11:42.934227   72712 out.go:177] 
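	A minimal triage sketch for the failure recorded above, restating only the commands the kubeadm output and the minikube suggestion already name; the profile name old-k8s-version-210442 comes from the CRI-O log below, and running these on the node via 'minikube ssh' is an assumption, not part of the recorded run.

	# On the node (e.g. minikube ssh -p old-k8s-version-210442): check why the kubelet never answered :10248/healthz
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# List any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# If a cgroup-driver mismatch is the cause, retry with the flag minikube suggests
	minikube start -p old-k8s-version-210442 --extra-config=kubelet.cgroup-driver=systemd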
	
	
	==> CRI-O <==
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.367186670Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076448367162842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed8050ce-faa0-49a4-89cc-c14848a98bae name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.367807821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c351ac3-4b70-4388-828d-5344bb418201 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.367894351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c351ac3-4b70-4388-828d-5344bb418201 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.367935805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6c351ac3-4b70-4388-828d-5344bb418201 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.405240534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b37e770-3852-4efc-ad8f-0752d620d55a name=/runtime.v1.RuntimeService/Version
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.405361023Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b37e770-3852-4efc-ad8f-0752d620d55a name=/runtime.v1.RuntimeService/Version
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.410264624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3254e108-f9ed-4eda-bb8c-afc12440244b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.410805317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076448410764436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3254e108-f9ed-4eda-bb8c-afc12440244b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.411491661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a51156e0-b413-4c8b-8001-9ea951f46a60 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.411566747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a51156e0-b413-4c8b-8001-9ea951f46a60 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.411613630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a51156e0-b413-4c8b-8001-9ea951f46a60 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.447278724Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d12fbd7b-7458-4dbc-8b3c-9d178fdc8f8c name=/runtime.v1.RuntimeService/Version
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.447385695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d12fbd7b-7458-4dbc-8b3c-9d178fdc8f8c name=/runtime.v1.RuntimeService/Version
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.449199831Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e127b30-58b6-4041-a4ee-237dda57927e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.449594921Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076448449572676,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e127b30-58b6-4041-a4ee-237dda57927e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.450394465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4b26e73-091e-4811-bb85-b8a6b0e723b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.450452834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4b26e73-091e-4811-bb85-b8a6b0e723b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.450483849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b4b26e73-091e-4811-bb85-b8a6b0e723b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.485248452Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3e4d1bd-04c8-40d9-b850-32bfc38d50e1 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.485344909Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3e4d1bd-04c8-40d9-b850-32bfc38d50e1 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.488350414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a56823e-0a9b-4cc8-bbcc-a48758489ed6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.488879441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076448488840033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a56823e-0a9b-4cc8-bbcc-a48758489ed6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.489880783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=648a98e1-63b4-4316-bed3-8826620fa5e6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.489959800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=648a98e1-63b4-4316-bed3-8826620fa5e6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:20:48 old-k8s-version-210442 crio[650]: time="2024-04-25 20:20:48.490006013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=648a98e1-63b4-4316-bed3-8826620fa5e6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr25 20:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063840] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050603] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.017598] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.598719] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.716084] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.653602] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.065627] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084851] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.203835] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.167647] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.363402] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.835292] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.069736] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.981211] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[ +11.947575] kauditd_printk_skb: 46 callbacks suppressed
	[Apr25 20:07] systemd-fstab-generator[4988]: Ignoring "noauto" option for root device
	[Apr25 20:09] systemd-fstab-generator[5273]: Ignoring "noauto" option for root device
	[  +0.069773] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:20:48 up 17 min,  0 users,  load average: 0.00, 0.02, 0.06
	Linux old-k8s-version-210442 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0007aada0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009a55f0, 0x24, 0x60, 0x7fdc682d0408, 0x118, ...)
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]: net/http.(*Transport).dial(0xc000a3c000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009a55f0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]: net/http.(*Transport).dialConn(0xc000a3c000, 0x4f7fe00, 0xc000120018, 0x0, 0xc000390540, 0x5, 0xc0009a55f0, 0x24, 0x0, 0xc0009de7e0, ...)
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]: net/http.(*Transport).dialConnFor(0xc000a3c000, 0xc0009353f0)
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]: created by net/http.(*Transport).queueForDial
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]: goroutine 167 [select]:
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]: net.(*netFD).connect.func2(0x4f7fe40, 0xc0002f0060, 0xc00004db00, 0xc0009fc360, 0xc0009fc300)
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]: created by net.(*netFD).connect
	Apr 25 20:20:43 old-k8s-version-210442 kubelet[6448]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Apr 25 20:20:43 old-k8s-version-210442 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 25 20:20:43 old-k8s-version-210442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 25 20:20:43 old-k8s-version-210442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 113.
	Apr 25 20:20:43 old-k8s-version-210442 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 25 20:20:43 old-k8s-version-210442 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 25 20:20:44 old-k8s-version-210442 kubelet[6457]: I0425 20:20:44.082149    6457 server.go:416] Version: v1.20.0
	Apr 25 20:20:44 old-k8s-version-210442 kubelet[6457]: I0425 20:20:44.082513    6457 server.go:837] Client rotation is on, will bootstrap in background
	Apr 25 20:20:44 old-k8s-version-210442 kubelet[6457]: I0425 20:20:44.084912    6457 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 25 20:20:44 old-k8s-version-210442 kubelet[6457]: W0425 20:20:44.085939    6457 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 25 20:20:44 old-k8s-version-210442 kubelet[6457]: I0425 20:20:44.086208    6457 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210442 -n old-k8s-version-210442
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 2 (241.917017ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-210442" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.57s)
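The post-mortem above shows why this failure never recovers from the test's point of view: the kubelet on old-k8s-version-210442 is crash-looping (systemd reports restart counter 113, and the v1.20.0 kubelet logs "Cannot detect current cgroup on cgroup v2"), so the apiserver stays down and every kubectl call is refused on localhost:8443. Purely as an illustration, and not part of the minikube test harness, the standalone Go sketch below polls the apiserver health endpoint before giving up; the https://localhost:8443 address mirrors the error above, while the probeAPIServer name, the 2-minute budget and the 5-second poll interval are invented for the example.

// probeAPIServer is a hypothetical diagnostic helper, not minikube test code.
// It polls the apiserver health endpoint until it answers or the timeout
// expires, matching the "connection refused" symptom in the logs above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probeAPIServer(baseURL string, timeout time.Duration) error {
	// The apiserver serves /healthz over TLS with a cluster-local CA, so this
	// throwaway probe skips certificate verification.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered, safe to run kubectl
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", baseURL, timeout)
}

func main() {
	// localhost:8443 is the address the failing kubectl calls used above.
	if err := probeAPIServer("https://localhost:8443", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}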

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (425.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-25 20:24:01.400100283 +0000 UTC m=+6767.809034843
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-142196 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-142196 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.782µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-142196 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
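The step that times out here is a plain wait for a Running pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace; once the 9m0s context expires, the follow-up kubectl describe fails immediately with the same context deadline exceeded, which is why no deployment info is printed. As a rough, hypothetical sketch only (not the harness's actual code), an equivalent wait can be written by shelling out to kubectl the way these tests do; the waitForDashboardPod name, the 10-second poll interval and the hard-coded profile/context name are assumptions for the example.

// waitForDashboardPod is a hypothetical re-creation of the kind of check the
// harness performs; it is not the actual test code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForDashboardPod(kubeContext string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Ask kubectl for the phase of every pod carrying the dashboard label.
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"-n", "kubernetes-dashboard", "get", "pods",
			"-l", "k8s-app=kubernetes-dashboard",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(10 * time.Second)
	}
	return fmt.Errorf("no Running kubernetes-dashboard pod within %s", timeout)
}

func main() {
	if err := waitForDashboardPod("default-k8s-diff-port-142196", 9*time.Minute); err != nil {
		fmt.Println(err)
	}
}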
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-142196 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-142196 logs -n 25: (1.553684096s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-512173            | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-744552             | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142196  | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210442        | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-512173                 | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-744552                  | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142196       | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:07 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210442             | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 20:22 UTC | 25 Apr 24 20:22 UTC |
	| start   | -p newest-cni-366100 --memory=2200 --alsologtostderr   | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:22 UTC | 25 Apr 24 20:23 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-366100             | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC | 25 Apr 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-366100                                   | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC | 25 Apr 24 20:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC | 25 Apr 24 20:23 UTC |
	| addons  | enable dashboard -p newest-cni-366100                  | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC | 25 Apr 24 20:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-366100 --memory=2200 --alsologtostderr   | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC | 25 Apr 24 20:24 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 20:23:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 20:23:34.622885   80072 out.go:291] Setting OutFile to fd 1 ...
	I0425 20:23:34.623014   80072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 20:23:34.623024   80072 out.go:304] Setting ErrFile to fd 2...
	I0425 20:23:34.623029   80072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 20:23:34.623199   80072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 20:23:34.623718   80072 out.go:298] Setting JSON to false
	I0425 20:23:34.624605   80072 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7561,"bootTime":1714069054,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 20:23:34.624663   80072 start.go:139] virtualization: kvm guest
	I0425 20:23:34.627084   80072 out.go:177] * [newest-cni-366100] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 20:23:34.628380   80072 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 20:23:34.628457   80072 notify.go:220] Checking for updates...
	I0425 20:23:34.629591   80072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 20:23:34.630992   80072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:23:34.632190   80072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 20:23:34.633572   80072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 20:23:34.634935   80072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 20:23:34.636798   80072 config.go:182] Loaded profile config "newest-cni-366100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:23:34.637395   80072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:23:34.637480   80072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:23:34.652537   80072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I0425 20:23:34.653008   80072 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:23:34.653497   80072 main.go:141] libmachine: Using API Version  1
	I0425 20:23:34.653521   80072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:23:34.653900   80072 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:23:34.654053   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:34.654320   80072 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 20:23:34.654678   80072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:23:34.654723   80072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:23:34.669855   80072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
	I0425 20:23:34.670354   80072 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:23:34.670850   80072 main.go:141] libmachine: Using API Version  1
	I0425 20:23:34.670876   80072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:23:34.671176   80072 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:23:34.671352   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:34.708419   80072 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 20:23:34.709628   80072 start.go:297] selected driver: kvm2
	I0425 20:23:34.709642   80072 start.go:901] validating driver "kvm2" against &{Name:newest-cni-366100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:newest-cni-366100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:23:34.709765   80072 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 20:23:34.710456   80072 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 20:23:34.710517   80072 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 20:23:34.725234   80072 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 20:23:34.725701   80072 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0425 20:23:34.725784   80072 cni.go:84] Creating CNI manager for ""
	I0425 20:23:34.725802   80072 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:23:34.725853   80072 start.go:340] cluster config:
	{Name:newest-cni-366100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-366100 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:23:34.725960   80072 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 20:23:34.727617   80072 out.go:177] * Starting "newest-cni-366100" primary control-plane node in "newest-cni-366100" cluster
	I0425 20:23:34.728921   80072 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:23:34.728971   80072 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 20:23:34.728991   80072 cache.go:56] Caching tarball of preloaded images
	I0425 20:23:34.729086   80072 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 20:23:34.729106   80072 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 20:23:34.729234   80072 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/config.json ...
	I0425 20:23:34.729426   80072 start.go:360] acquireMachinesLock for newest-cni-366100: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 20:23:34.729467   80072 start.go:364] duration metric: took 23.287µs to acquireMachinesLock for "newest-cni-366100"
	I0425 20:23:34.729478   80072 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:23:34.729482   80072 fix.go:54] fixHost starting: 
	I0425 20:23:34.729722   80072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:23:34.729752   80072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:23:34.744430   80072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
	I0425 20:23:34.744910   80072 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:23:34.745384   80072 main.go:141] libmachine: Using API Version  1
	I0425 20:23:34.745417   80072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:23:34.745812   80072 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:23:34.745973   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:34.746108   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetState
	I0425 20:23:34.747691   80072 fix.go:112] recreateIfNeeded on newest-cni-366100: state=Stopped err=<nil>
	I0425 20:23:34.747715   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	W0425 20:23:34.747877   80072 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:23:34.749786   80072 out.go:177] * Restarting existing kvm2 VM for "newest-cni-366100" ...
	I0425 20:23:34.751073   80072 main.go:141] libmachine: (newest-cni-366100) Calling .Start
	I0425 20:23:34.751241   80072 main.go:141] libmachine: (newest-cni-366100) Ensuring networks are active...
	I0425 20:23:34.751997   80072 main.go:141] libmachine: (newest-cni-366100) Ensuring network default is active
	I0425 20:23:34.752404   80072 main.go:141] libmachine: (newest-cni-366100) Ensuring network mk-newest-cni-366100 is active
	I0425 20:23:34.752821   80072 main.go:141] libmachine: (newest-cni-366100) Getting domain xml...
	I0425 20:23:34.753623   80072 main.go:141] libmachine: (newest-cni-366100) Creating domain...
	I0425 20:23:35.996444   80072 main.go:141] libmachine: (newest-cni-366100) Waiting to get IP...
	I0425 20:23:35.997350   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:35.997842   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:35.997913   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:35.997804   80107 retry.go:31] will retry after 234.042053ms: waiting for machine to come up
	I0425 20:23:36.233193   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:36.233797   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:36.233857   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:36.233764   80107 retry.go:31] will retry after 349.383929ms: waiting for machine to come up
	I0425 20:23:36.584361   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:36.584917   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:36.584942   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:36.584884   80107 retry.go:31] will retry after 461.234598ms: waiting for machine to come up
	I0425 20:23:37.047383   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:37.047913   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:37.047943   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:37.047866   80107 retry.go:31] will retry after 538.387751ms: waiting for machine to come up
	I0425 20:23:37.588537   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:37.588987   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:37.589022   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:37.588944   80107 retry.go:31] will retry after 608.399222ms: waiting for machine to come up
	I0425 20:23:38.198714   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:38.199154   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:38.199177   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:38.199114   80107 retry.go:31] will retry after 877.686267ms: waiting for machine to come up
	I0425 20:23:39.078130   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:39.078606   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:39.078638   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:39.078554   80107 retry.go:31] will retry after 1.065414647s: waiting for machine to come up
	I0425 20:23:40.145266   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:40.145692   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:40.145735   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:40.145660   80107 retry.go:31] will retry after 1.028159381s: waiting for machine to come up
	I0425 20:23:41.175885   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:41.176331   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:41.176359   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:41.176268   80107 retry.go:31] will retry after 1.509700207s: waiting for machine to come up
	I0425 20:23:42.687455   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:42.687838   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:42.687870   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:42.687814   80107 retry.go:31] will retry after 1.661055477s: waiting for machine to come up
	I0425 20:23:44.351305   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:44.351851   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:44.351884   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:44.351780   80107 retry.go:31] will retry after 2.061790599s: waiting for machine to come up
	I0425 20:23:46.415486   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:46.416043   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:46.416081   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:46.415980   80107 retry.go:31] will retry after 3.087288552s: waiting for machine to come up
	I0425 20:23:49.507104   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:49.507505   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:49.507528   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:49.507474   80107 retry.go:31] will retry after 2.834636598s: waiting for machine to come up
	I0425 20:23:52.343340   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:52.343727   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:52.343761   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:52.343667   80107 retry.go:31] will retry after 5.650772362s: waiting for machine to come up
	I0425 20:23:57.996280   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:57.996821   80072 main.go:141] libmachine: (newest-cni-366100) Found IP for machine: 192.168.61.209
	I0425 20:23:57.996843   80072 main.go:141] libmachine: (newest-cni-366100) Reserving static IP address...
	I0425 20:23:57.996868   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has current primary IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:57.997252   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "newest-cni-366100", mac: "52:54:00:a7:4f:45", ip: "192.168.61.209"} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:57.997280   80072 main.go:141] libmachine: (newest-cni-366100) Reserved static IP address: 192.168.61.209
	I0425 20:23:57.997315   80072 main.go:141] libmachine: (newest-cni-366100) DBG | skip adding static IP to network mk-newest-cni-366100 - found existing host DHCP lease matching {name: "newest-cni-366100", mac: "52:54:00:a7:4f:45", ip: "192.168.61.209"}
	I0425 20:23:57.997338   80072 main.go:141] libmachine: (newest-cni-366100) DBG | Getting to WaitForSSH function...
	I0425 20:23:57.997350   80072 main.go:141] libmachine: (newest-cni-366100) Waiting for SSH to be available...
	I0425 20:23:58.000080   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.000406   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:58.000427   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.000639   80072 main.go:141] libmachine: (newest-cni-366100) DBG | Using SSH client type: external
	I0425 20:23:58.000682   80072 main.go:141] libmachine: (newest-cni-366100) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa (-rw-------)
	I0425 20:23:58.000722   80072 main.go:141] libmachine: (newest-cni-366100) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:23:58.000747   80072 main.go:141] libmachine: (newest-cni-366100) DBG | About to run SSH command:
	I0425 20:23:58.000774   80072 main.go:141] libmachine: (newest-cni-366100) DBG | exit 0
	I0425 20:23:58.135049   80072 main.go:141] libmachine: (newest-cni-366100) DBG | SSH cmd err, output: <nil>: 
	I0425 20:23:58.135327   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetConfigRaw
	I0425 20:23:58.135993   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetIP
	I0425 20:23:58.138938   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.139384   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:58.139414   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.139871   80072 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/config.json ...
	I0425 20:23:58.140057   80072 machine.go:94] provisionDockerMachine start ...
	I0425 20:23:58.140074   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:58.140265   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:58.143303   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.143837   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:58.143867   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.143944   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:23:58.144138   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:58.144305   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:58.144456   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:23:58.144660   80072 main.go:141] libmachine: Using SSH client type: native
	I0425 20:23:58.144915   80072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0425 20:23:58.144930   80072 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:23:58.255340   80072 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:23:58.255381   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetMachineName
	I0425 20:23:58.255654   80072 buildroot.go:166] provisioning hostname "newest-cni-366100"
	I0425 20:23:58.255684   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetMachineName
	I0425 20:23:58.255878   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:58.258682   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.259062   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:58.259084   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.259222   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:23:58.259410   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:58.259563   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:58.259788   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:23:58.259990   80072 main.go:141] libmachine: Using SSH client type: native
	I0425 20:23:58.260212   80072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0425 20:23:58.260230   80072 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-366100 && echo "newest-cni-366100" | sudo tee /etc/hostname
	I0425 20:23:58.386969   80072 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-366100
	
	I0425 20:23:58.387044   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:58.390235   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.390607   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:58.390654   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.390847   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:23:58.391063   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:58.391230   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:58.391403   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:23:58.391626   80072 main.go:141] libmachine: Using SSH client type: native
	I0425 20:23:58.391867   80072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0425 20:23:58.391893   80072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-366100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-366100/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-366100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:23:58.513802   80072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
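The SSH command above keeps /etc/hosts pointing 127.0.1.1 at the node's new hostname, rewriting the existing line or appending one. A minimal Go sketch of how that command string could be assembled, using a hypothetical buildHostsFixup helper (not minikube's actual code):

package main

import "fmt"

// buildHostsFixup is a hypothetical helper reproducing the shell snippet the
// provisioner runs over SSH: ensure /etc/hosts maps 127.0.1.1 to the node's
// hostname, rewriting the existing 127.0.1.1 line or appending a new one.
func buildHostsFixup(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(buildHostsFixup("newest-cni-366100"))
}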
	I0425 20:23:58.513835   80072 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:23:58.513867   80072 buildroot.go:174] setting up certificates
	I0425 20:23:58.513877   80072 provision.go:84] configureAuth start
	I0425 20:23:58.513890   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetMachineName
	I0425 20:23:58.514223   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetIP
	I0425 20:23:58.516983   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.517454   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:58.517497   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.517714   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:58.520421   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.520812   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:58.520846   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.520995   80072 provision.go:143] copyHostCerts
	I0425 20:23:58.521059   80072 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:23:58.521072   80072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:23:58.521148   80072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:23:58.521284   80072 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:23:58.521296   80072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:23:58.521336   80072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:23:58.521413   80072 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:23:58.521421   80072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:23:58.521443   80072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:23:58.521502   80072 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.newest-cni-366100 san=[127.0.0.1 192.168.61.209 localhost minikube newest-cni-366100]
	I0425 20:23:58.727405   80072 provision.go:177] copyRemoteCerts
	I0425 20:23:58.727461   80072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:23:58.727484   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:58.730087   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.730514   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:58.730548   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.730693   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:23:58.730894   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:58.731038   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:23:58.731239   80072 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa Username:docker}
	I0425 20:23:58.818061   80072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:23:58.851610   80072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:23:58.882604   80072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:23:58.912618   80072 provision.go:87] duration metric: took 398.726725ms to configureAuth
	I0425 20:23:58.912650   80072 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:23:58.912882   80072 config.go:182] Loaded profile config "newest-cni-366100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:23:58.912959   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:58.915457   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.915754   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:58.915786   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:58.915938   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:23:58.916153   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:58.916347   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:58.916516   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:23:58.916721   80072 main.go:141] libmachine: Using SSH client type: native
	I0425 20:23:58.916880   80072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0425 20:23:58.916899   80072 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:23:59.239023   80072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:23:59.239049   80072 machine.go:97] duration metric: took 1.098978426s to provisionDockerMachine
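Just before this, the provisioner wrote /etc/sysconfig/crio.minikube and restarted CRI-O so that the service CIDR is treated as an insecure registry range. A minimal sketch of the file content shown in the log, assuming a hypothetical crioMinikubeSysconfig helper rather than minikube's actual implementation:

package main

import "fmt"

// crioMinikubeSysconfig is a hypothetical sketch of the content written to
// /etc/sysconfig/crio.minikube before CRI-O is restarted.
func crioMinikubeSysconfig(insecureCIDR string) string {
	return fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureCIDR)
}

func main() {
	fmt.Print(crioMinikubeSysconfig("10.96.0.0/12"))
}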
	I0425 20:23:59.239076   80072 start.go:293] postStartSetup for "newest-cni-366100" (driver="kvm2")
	I0425 20:23:59.239094   80072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:23:59.239112   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:59.239481   80072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:23:59.239524   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:59.242600   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:59.243010   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:59.243046   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:59.243193   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:23:59.243388   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:59.243565   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:23:59.243713   80072 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa Username:docker}
	I0425 20:23:59.336111   80072 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:23:59.341341   80072 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:23:59.341368   80072 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:23:59.341431   80072 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:23:59.341524   80072 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:23:59.341632   80072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:23:59.353185   80072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:23:59.383061   80072 start.go:296] duration metric: took 143.956534ms for postStartSetup
	I0425 20:23:59.383111   80072 fix.go:56] duration metric: took 24.65362714s for fixHost
	I0425 20:23:59.383138   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:59.386131   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:59.386490   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:59.386531   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:59.386744   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:23:59.386960   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:59.387117   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:59.387278   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:23:59.387513   80072 main.go:141] libmachine: Using SSH client type: native
	I0425 20:23:59.387683   80072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0425 20:23:59.387693   80072 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:23:59.500391   80072 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714076639.474914403
	
	I0425 20:23:59.500413   80072 fix.go:216] guest clock: 1714076639.474914403
	I0425 20:23:59.500421   80072 fix.go:229] Guest: 2024-04-25 20:23:59.474914403 +0000 UTC Remote: 2024-04-25 20:23:59.383116438 +0000 UTC m=+24.806364376 (delta=91.797965ms)
	I0425 20:23:59.500438   80072 fix.go:200] guest clock delta is within tolerance: 91.797965ms
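The tolerance check above compares the guest's date output with the host's wall clock. A small sketch that reproduces the 91.797965ms delta from the log, assuming a hypothetical clockDeltaWithinTolerance helper and a 2s tolerance (the actual threshold is not shown in the log):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance is a hypothetical version of the check behind the
// "guest clock delta is within tolerance" line: take the absolute difference
// between guest and host clocks and accept it if it is below the tolerance.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Date(2024, 4, 25, 20, 23, 59, 383116438, time.UTC)
	guest := time.Date(2024, 4, 25, 20, 23, 59, 474914403, time.UTC)
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Println(delta, ok) // 91.797965ms true
}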
	I0425 20:23:59.500446   80072 start.go:83] releasing machines lock for "newest-cni-366100", held for 24.770973107s
	I0425 20:23:59.500461   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:59.500697   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetIP
	I0425 20:23:59.503730   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:59.504184   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:59.504214   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:59.504338   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:59.505016   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:59.505178   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:59.505262   80072 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:23:59.505303   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:59.505415   80072 ssh_runner.go:195] Run: cat /version.json
	I0425 20:23:59.505451   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:59.508331   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:59.508678   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:59.508712   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:59.508729   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:59.508857   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:23:59.509055   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:59.509136   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:59.509165   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:59.509222   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:23:59.509403   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:23:59.509409   80072 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa Username:docker}
	I0425 20:23:59.509559   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:59.509691   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:23:59.509833   80072 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa Username:docker}
	I0425 20:23:59.593076   80072 ssh_runner.go:195] Run: systemctl --version
	I0425 20:23:59.621098   80072 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:23:59.774074   80072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:23:59.782357   80072 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:23:59.782434   80072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:23:59.802835   80072 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:23:59.802865   80072 start.go:494] detecting cgroup driver to use...
	I0425 20:23:59.802928   80072 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:23:59.827214   80072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:23:59.846250   80072 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:23:59.846313   80072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:23:59.866014   80072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:23:59.885256   80072 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:24:00.038308   80072 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:24:00.196297   80072 docker.go:233] disabling docker service ...
	I0425 20:24:00.196360   80072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:24:00.212417   80072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:24:00.228137   80072 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:24:00.384701   80072 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:24:00.521510   80072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:24:00.536706   80072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:24:00.557181   80072 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:24:00.557254   80072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:24:00.568331   80072 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:24:00.568396   80072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:24:00.580113   80072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:24:00.592367   80072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:24:00.604281   80072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:24:00.616590   80072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:24:00.628279   80072 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:24:00.651860   80072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
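The sed commands above pin the pause image, switch the cgroup manager to cgroupfs, and reinsert conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. A minimal in-memory Go sketch of the same rewrites, using a hypothetical rewriteCrioConf helper (regexp instead of sed; not minikube's actual code):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies, in memory, the edits the log shows being made with
// sed: replace the pause_image line, replace the cgroup_manager line, drop any
// existing conmon_cgroup line, then add conmon_cgroup = "pod" after
// cgroup_manager.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(conf, "registry.k8s.io/pause:3.9", "cgroupfs"))
}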
	I0425 20:24:00.663990   80072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:24:00.675060   80072 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:24:00.675117   80072 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:24:00.691794   80072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:24:00.707002   80072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:24:00.846543   80072 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:24:01.008152   80072 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:24:01.008224   80072 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:24:01.014158   80072 start.go:562] Will wait 60s for crictl version
	I0425 20:24:01.014230   80072 ssh_runner.go:195] Run: which crictl
	I0425 20:24:01.018586   80072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:24:01.066480   80072 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
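The crictl version output above reports cri-o 1.29.1 speaking CRI API v1. A small sketch of how those key/value lines could be parsed, assuming a hypothetical parseCrictlVersion helper:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion parses "Key:  value" lines like the crictl version
// output quoted in the log into a map.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), ":")
		if ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(out)
	fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // cri-o 1.29.1
}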
	I0425 20:24:01.066576   80072 ssh_runner.go:195] Run: crio --version
	I0425 20:24:01.103350   80072 ssh_runner.go:195] Run: crio --version
	I0425 20:24:01.143280   80072 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:24:01.144943   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetIP
	I0425 20:24:01.148015   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:24:01.148496   80072 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:24:01.148532   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:24:01.148723   80072 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 20:24:01.153777   80072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
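The bash pipeline above drops any stale host.minikube.internal entry from /etc/hosts and appends one for the gateway IP 192.168.61.1. An in-memory Go sketch of the same upsert, using a hypothetical upsertHostsEntry helper:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any existing host.minikube.internal line and
// appends a fresh one pointing at the given IP, mirroring the shell pipeline
// shown in the log.
func upsertHostsEntry(hosts, ip string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\thost.minikube.internal")
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.61.1"))
}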
	I0425 20:24:01.171117   80072 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.120604946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d51c4c1-292c-425c-bd07-81d1d498cd3c name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.121434121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075436968615154,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854776f370afd769520f7dd7fd2cd6f4088109b63b5404544585784fc25663c6,PodSandboxId:15ef1946510c86cd77304767a5a673cedf3b91ba715619788f50870b8dcfe5f5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075416854493311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa3cc9ba-0ade-4039-a7f9-377e809f2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 312d4fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1,PodSandboxId:09f62e29b3db9ba7ec770035e57fe6b766e952b43dc7219ebc5d8017b3f997c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075413892954632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z6ls5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef8d9f5-f623-4632-bb88-7e5c60220725,},Annotations:map[string]string{io.kubernetes.container.hash: 174bdd8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075406121225952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c,PodSandboxId:e4f5f5571a966a63e599fd628cfb69001dad1712ec1f5b5c9515012f278b7eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075406068847960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqmtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ef58b-09d4-4e88-925b-b5a
3afc68361,},Annotations:map[string]string{io.kubernetes.container.hash: 6a43d313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075,PodSandboxId:fce641181064f56cf7e95bc6d921842f082527ee6627528ec58fb8c5730ae6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075401473770392,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaac9ac173dc156b9690dc6b
e7f1916,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3,PodSandboxId:308c50030e231f0fe3ffeb1d2c8c4abc82e51179ffba4bacfd95dcee6f8ed331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075401469413711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c614667a3a1301a9dcae27075736d426,},Annotations:map[string
]string{io.kubernetes.container.hash: 19e66a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa,PodSandboxId:33759899f143a39023c021fbf27602a0ad2454a572816760590c9a4add2b1ef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075401490467231,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18075c0328297e29839df100d21ef24,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 5af9b73b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4,PodSandboxId:39ac71ee0f08bd5c9c4c81c9f1b9699c9eb750ca1624e1e92df3b584e71394f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075401423696001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5097b936fa2847d92518c82e5376e274,}
,Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d51c4c1-292c-425c-bd07-81d1d498cd3c name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.183793055Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a35f5de-e248-4134-8b4a-0fb114bb66ae name=/runtime.v1.RuntimeService/Version
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.183959757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a35f5de-e248-4134-8b4a-0fb114bb66ae name=/runtime.v1.RuntimeService/Version
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.185590991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18707064-69ae-4e38-abc9-c626ca65e161 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.186366918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076642186316793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18707064-69ae-4e38-abc9-c626ca65e161 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.187581360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06818bf5-6ea5-4418-ac93-c9e083d1ce17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.187680181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06818bf5-6ea5-4418-ac93-c9e083d1ce17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.187958060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075436968615154,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854776f370afd769520f7dd7fd2cd6f4088109b63b5404544585784fc25663c6,PodSandboxId:15ef1946510c86cd77304767a5a673cedf3b91ba715619788f50870b8dcfe5f5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075416854493311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa3cc9ba-0ade-4039-a7f9-377e809f2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 312d4fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1,PodSandboxId:09f62e29b3db9ba7ec770035e57fe6b766e952b43dc7219ebc5d8017b3f997c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075413892954632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z6ls5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef8d9f5-f623-4632-bb88-7e5c60220725,},Annotations:map[string]string{io.kubernetes.container.hash: 174bdd8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075406121225952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c,PodSandboxId:e4f5f5571a966a63e599fd628cfb69001dad1712ec1f5b5c9515012f278b7eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075406068847960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqmtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ef58b-09d4-4e88-925b-b5a
3afc68361,},Annotations:map[string]string{io.kubernetes.container.hash: 6a43d313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075,PodSandboxId:fce641181064f56cf7e95bc6d921842f082527ee6627528ec58fb8c5730ae6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075401473770392,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaac9ac173dc156b9690dc6b
e7f1916,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3,PodSandboxId:308c50030e231f0fe3ffeb1d2c8c4abc82e51179ffba4bacfd95dcee6f8ed331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075401469413711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c614667a3a1301a9dcae27075736d426,},Annotations:map[string
]string{io.kubernetes.container.hash: 19e66a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa,PodSandboxId:33759899f143a39023c021fbf27602a0ad2454a572816760590c9a4add2b1ef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075401490467231,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18075c0328297e29839df100d21ef24,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 5af9b73b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4,PodSandboxId:39ac71ee0f08bd5c9c4c81c9f1b9699c9eb750ca1624e1e92df3b584e71394f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075401423696001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5097b936fa2847d92518c82e5376e274,}
,Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06818bf5-6ea5-4418-ac93-c9e083d1ce17 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.225780104Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a10a2f2c-1db8-4cda-80c0-8be6b5c32beb name=/runtime.v1.ImageService/ListImages
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.226652750Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.0],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81 registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3],Size_:117609952,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450],Size_:112170310,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinn
ed:false,},&Image{Id:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.0],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67 registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a],Size_:63026502,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,RepoTags:[registry.k8s.io/kube-proxy:v1.30.0],RepoDigests:[registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68 registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210],Size_:85932953,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b4
80cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,RepoTags:[docker.io/kindest/kindnetd:v20240202-8f1494ea],RepoDigests:[docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988 docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac],Size_:65291810,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c
5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=a10a2f2c-1db8-4cda-80c0-8be6b5c32beb name=/runtime.v1.ImageService/ListImages
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.257662776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8578ec2-8d73-46f9-a327-957d3dca741d name=/runtime.v1.RuntimeService/Version
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.257786438Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8578ec2-8d73-46f9-a327-957d3dca741d name=/runtime.v1.RuntimeService/Version
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.259497149Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01139ace-c25f-4468-99e1-10e19f011d24 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.260019901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076642259989727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01139ace-c25f-4468-99e1-10e19f011d24 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.260931590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61e16bb8-edc2-432c-b9eb-0d24c30da2f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.261023815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61e16bb8-edc2-432c-b9eb-0d24c30da2f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.261351698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075436968615154,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854776f370afd769520f7dd7fd2cd6f4088109b63b5404544585784fc25663c6,PodSandboxId:15ef1946510c86cd77304767a5a673cedf3b91ba715619788f50870b8dcfe5f5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075416854493311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa3cc9ba-0ade-4039-a7f9-377e809f2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 312d4fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1,PodSandboxId:09f62e29b3db9ba7ec770035e57fe6b766e952b43dc7219ebc5d8017b3f997c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075413892954632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z6ls5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef8d9f5-f623-4632-bb88-7e5c60220725,},Annotations:map[string]string{io.kubernetes.container.hash: 174bdd8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075406121225952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c,PodSandboxId:e4f5f5571a966a63e599fd628cfb69001dad1712ec1f5b5c9515012f278b7eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075406068847960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqmtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ef58b-09d4-4e88-925b-b5a
3afc68361,},Annotations:map[string]string{io.kubernetes.container.hash: 6a43d313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075,PodSandboxId:fce641181064f56cf7e95bc6d921842f082527ee6627528ec58fb8c5730ae6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075401473770392,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaac9ac173dc156b9690dc6b
e7f1916,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3,PodSandboxId:308c50030e231f0fe3ffeb1d2c8c4abc82e51179ffba4bacfd95dcee6f8ed331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075401469413711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c614667a3a1301a9dcae27075736d426,},Annotations:map[string
]string{io.kubernetes.container.hash: 19e66a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa,PodSandboxId:33759899f143a39023c021fbf27602a0ad2454a572816760590c9a4add2b1ef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075401490467231,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18075c0328297e29839df100d21ef24,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 5af9b73b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4,PodSandboxId:39ac71ee0f08bd5c9c4c81c9f1b9699c9eb750ca1624e1e92df3b584e71394f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075401423696001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5097b936fa2847d92518c82e5376e274,}
,Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61e16bb8-edc2-432c-b9eb-0d24c30da2f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.310909125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12bdc014-793c-42cc-9153-ebfa95a7841c name=/runtime.v1.RuntimeService/Version
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.311042200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12bdc014-793c-42cc-9153-ebfa95a7841c name=/runtime.v1.RuntimeService/Version
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.312879287Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5f9b1a5-c1b6-4526-8f45-ac05a4090b7a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.313459794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076642313420563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5f9b1a5-c1b6-4526-8f45-ac05a4090b7a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.314516325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0e69c9e-181d-48ec-851a-a5f5dc5751b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.314573922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0e69c9e-181d-48ec-851a-a5f5dc5751b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:24:02 default-k8s-diff-port-142196 crio[729]: time="2024-04-25 20:24:02.314907985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075436968615154,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854776f370afd769520f7dd7fd2cd6f4088109b63b5404544585784fc25663c6,PodSandboxId:15ef1946510c86cd77304767a5a673cedf3b91ba715619788f50870b8dcfe5f5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075416854493311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa3cc9ba-0ade-4039-a7f9-377e809f2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 312d4fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1,PodSandboxId:09f62e29b3db9ba7ec770035e57fe6b766e952b43dc7219ebc5d8017b3f997c8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075413892954632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z6ls5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef8d9f5-f623-4632-bb88-7e5c60220725,},Annotations:map[string]string{io.kubernetes.container.hash: 174bdd8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5,PodSandboxId:66467b045e867aa91870d385d90620b4f4aaa51cf4093f664d71e3ab644e2a42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075406121225952,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 82be8699-608a-4aff-aac4-c709cba8655b,},Annotations:map[string]string{io.kubernetes.container.hash: 1a0e261,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c,PodSandboxId:e4f5f5571a966a63e599fd628cfb69001dad1712ec1f5b5c9515012f278b7eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075406068847960,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqmtp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc6ef58b-09d4-4e88-925b-b5a
3afc68361,},Annotations:map[string]string{io.kubernetes.container.hash: 6a43d313,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075,PodSandboxId:fce641181064f56cf7e95bc6d921842f082527ee6627528ec58fb8c5730ae6bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075401473770392,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eeaac9ac173dc156b9690dc6b
e7f1916,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3,PodSandboxId:308c50030e231f0fe3ffeb1d2c8c4abc82e51179ffba4bacfd95dcee6f8ed331,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075401469413711,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c614667a3a1301a9dcae27075736d426,},Annotations:map[string
]string{io.kubernetes.container.hash: 19e66a41,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa,PodSandboxId:33759899f143a39023c021fbf27602a0ad2454a572816760590c9a4add2b1ef3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075401490467231,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d18075c0328297e29839df100d21ef24,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 5af9b73b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4,PodSandboxId:39ac71ee0f08bd5c9c4c81c9f1b9699c9eb750ca1624e1e92df3b584e71394f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075401423696001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-142196,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5097b936fa2847d92518c82e5376e274,}
,Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0e69c9e-181d-48ec-851a-a5f5dc5751b3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7aef2f269df51       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   66467b045e867       storage-provisioner
	854776f370afd       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   15ef1946510c8       busybox
	2370c81d0f1fb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   09f62e29b3db9       coredns-7db6d8ff4d-z6ls5
	c1088dde2fde0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   66467b045e867       storage-provisioner
	bb19806d4c42c       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      20 minutes ago      Running             kube-proxy                1                   e4f5f5571a966       kube-proxy-bqmtp
	7c6a6c0bef83a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      20 minutes ago      Running             kube-apiserver            1                   33759899f143a       kube-apiserver-default-k8s-diff-port-142196
	a553ccfa98465       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      20 minutes ago      Running             kube-scheduler            1                   fce641181064f       kube-scheduler-default-k8s-diff-port-142196
	430ba8aceb30f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   308c50030e231       etcd-default-k8s-diff-port-142196
	ae2f5c52c77d7       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      20 minutes ago      Running             kube-controller-manager   1                   39ac71ee0f08b       kube-controller-manager-default-k8s-diff-port-142196
	
	
	==> coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35943 - 62266 "HINFO IN 7043630354879609154.1372615921858047967. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017474524s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-142196
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-142196
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=default-k8s-diff-port-142196
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T19_55_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:55:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-142196
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 20:23:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 20:19:14 +0000   Thu, 25 Apr 2024 19:55:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 20:19:14 +0000   Thu, 25 Apr 2024 19:55:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 20:19:14 +0000   Thu, 25 Apr 2024 19:55:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 20:19:14 +0000   Thu, 25 Apr 2024 20:03:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    default-k8s-diff-port-142196
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ad1f8fba81d4105a156fc610cbd8b0b
	  System UUID:                6ad1f8fb-a81d-4105-a156-fc610cbd8b0b
	  Boot ID:                    6256b908-1be9-403b-b416-d8693fb50908
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-z6ls5                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-142196                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-default-k8s-diff-port-142196             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-142196    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-bqmtp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-142196             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-569cc877fc-cphk6                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    0 (0%)
	  memory             370Mi (17%)   170Mi (8%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node default-k8s-diff-port-142196 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-142196 event: Registered Node default-k8s-diff-port-142196 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-142196 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-142196 event: Registered Node default-k8s-diff-port-142196 in Controller
	
	
	==> dmesg <==
	[Apr25 20:02] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052982] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043827] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.686276] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Apr25 20:03] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.612499] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.105808] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.059450] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072278] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.229550] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.148093] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[  +0.353224] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +5.485161] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.080522] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.308265] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +5.598090] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.632799] systemd-fstab-generator[1557]: Ignoring "noauto" option for root device
	[  +2.142993] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.141446] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] <==
	{"level":"info","ts":"2024-04-25T20:03:41.099721Z","caller":"traceutil/trace.go:171","msg":"trace[1210898283] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"125.206764ms","start":"2024-04-25T20:03:40.974501Z","end":"2024-04-25T20:03:41.099707Z","steps":["trace[1210898283] 'process raft request'  (duration: 124.505489ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T20:04:02.20059Z","caller":"traceutil/trace.go:171","msg":"trace[932700225] transaction","detail":"{read_only:false; response_revision:583; number_of_response:1; }","duration":"461.728259ms","start":"2024-04-25T20:04:01.73884Z","end":"2024-04-25T20:04:02.200569Z","steps":["trace[932700225] 'process raft request'  (duration: 461.472322ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T20:04:02.200786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:04:01.738826Z","time spent":"461.873827ms","remote":"127.0.0.1:36054","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":833,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a4098558\" mod_revision:537 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a4098558\" value_size:738 lease:6421727447003126453 >> failure:<request_range:<key:\"/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a4098558\" > >"}
	{"level":"warn","ts":"2024-04-25T20:04:02.545033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.69523ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6421727447003126827 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" mod_revision:570 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" value_size:4212 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-25T20:04:02.545181Z","caller":"traceutil/trace.go:171","msg":"trace[1603849765] linearizableReadLoop","detail":"{readStateIndex:627; appliedIndex:626; }","duration":"678.629338ms","start":"2024-04-25T20:04:01.866538Z","end":"2024-04-25T20:04:02.545168Z","steps":["trace[1603849765] 'read index received'  (duration: 334.53717ms)","trace[1603849765] 'applied index is now lower than readState.Index'  (duration: 344.091257ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-25T20:04:02.54524Z","caller":"traceutil/trace.go:171","msg":"trace[222074805] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"804.130096ms","start":"2024-04-25T20:04:01.741104Z","end":"2024-04-25T20:04:02.545234Z","steps":["trace[222074805] 'process raft request'  (duration: 686.094133ms)","trace[222074805] 'compare'  (duration: 117.612416ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-25T20:04:02.545289Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:04:01.741089Z","time spent":"804.16781ms","remote":"127.0.0.1:36142","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4278,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" mod_revision:570 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" value_size:4212 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" > >"}
	{"level":"warn","ts":"2024-04-25T20:04:02.545403Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"678.863389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" ","response":"range_response_count:1 size:4293"}
	{"level":"info","ts":"2024-04-25T20:04:02.545446Z","caller":"traceutil/trace.go:171","msg":"trace[1463483784] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-cphk6; range_end:; response_count:1; response_revision:584; }","duration":"678.924592ms","start":"2024-04-25T20:04:01.866515Z","end":"2024-04-25T20:04:02.54544Z","steps":["trace[1463483784] 'agreement among raft nodes before linearized reading'  (duration: 678.864971ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T20:04:02.545469Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:04:01.866502Z","time spent":"678.961891ms","remote":"127.0.0.1:36142","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4316,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-cphk6\" "}
	{"level":"warn","ts":"2024-04-25T20:04:02.545592Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.28737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a409cdbd\" ","response":"range_response_count:1 size:804"}
	{"level":"info","ts":"2024-04-25T20:04:02.545651Z","caller":"traceutil/trace.go:171","msg":"trace[453362744] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a409cdbd; range_end:; response_count:1; response_revision:584; }","duration":"341.344309ms","start":"2024-04-25T20:04:02.204297Z","end":"2024-04-25T20:04:02.545642Z","steps":["trace[453362744] 'agreement among raft nodes before linearized reading'  (duration: 341.22708ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T20:04:02.545678Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:04:02.204253Z","time spent":"341.418146ms","remote":"127.0.0.1:36054","response type":"/etcdserverpb.KV/Range","request count":0,"request size":79,"response count":1,"response size":827,"request content":"key:\"/registry/events/kube-system/metrics-server-569cc877fc-cphk6.17c99e81a409cdbd\" "}
	{"level":"warn","ts":"2024-04-25T20:04:02.545861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.206725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-25T20:04:02.54593Z","caller":"traceutil/trace.go:171","msg":"trace[1874879402] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:584; }","duration":"265.294811ms","start":"2024-04-25T20:04:02.280626Z","end":"2024-04-25T20:04:02.54592Z","steps":["trace[1874879402] 'agreement among raft nodes before linearized reading'  (duration: 265.212487ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T20:13:22.661386Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":809}
	{"level":"info","ts":"2024-04-25T20:13:22.671708Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":809,"took":"9.667298ms","hash":2325524828,"current-db-size-bytes":2568192,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2568192,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-04-25T20:13:22.67183Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2325524828,"revision":809,"compact-revision":-1}
	{"level":"info","ts":"2024-04-25T20:18:22.669433Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1051}
	{"level":"info","ts":"2024-04-25T20:18:22.674646Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1051,"took":"4.572323ms","hash":886900171,"current-db-size-bytes":2568192,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1613824,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-25T20:18:22.674728Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":886900171,"revision":1051,"compact-revision":809}
	{"level":"info","ts":"2024-04-25T20:22:54.848836Z","caller":"traceutil/trace.go:171","msg":"trace[1080469025] transaction","detail":"{read_only:false; response_revision:1514; number_of_response:1; }","duration":"117.29885ms","start":"2024-04-25T20:22:54.731486Z","end":"2024-04-25T20:22:54.848785Z","steps":["trace[1080469025] 'process raft request'  (duration: 117.180293ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-25T20:23:22.678666Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1293}
	{"level":"info","ts":"2024-04-25T20:23:22.683258Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1293,"took":"4.295171ms","hash":58858787,"current-db-size-bytes":2568192,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1601536,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-25T20:23:22.683345Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":58858787,"revision":1293,"compact-revision":1051}
	
	
	==> kernel <==
	 20:24:02 up 21 min,  0 users,  load average: 0.12, 0.14, 0.09
	Linux default-k8s-diff-port-142196 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] <==
	I0425 20:18:25.531380       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:19:25.530693       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:19:25.530806       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:19:25.530816       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:19:25.531981       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:19:25.532189       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:19:25.532251       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:21:25.531837       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:21:25.532088       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:21:25.532207       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:21:25.532355       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:21:25.532508       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:21:25.533387       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:23:24.534091       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:23:24.534600       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0425 20:23:25.535511       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:23:25.535579       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:23:25.535589       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:23:25.535632       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:23:25.535681       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:23:25.536898       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] <==
	I0425 20:18:10.332927       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:18:39.777874       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:18:40.341733       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:19:09.782554       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:19:10.349769       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0425 20:19:38.752106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="260.359µs"
	E0425 20:19:39.787787       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:19:40.356604       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0425 20:19:50.749568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="62.057µs"
	E0425 20:20:09.793490       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:20:10.366635       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:20:39.799378       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:20:40.377096       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:21:09.805858       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:21:10.384990       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:21:39.811279       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:21:40.398047       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:22:09.816018       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:22:10.406268       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:22:39.822269       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:22:40.415251       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:23:09.828284       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:23:10.425469       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:23:39.833938       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:23:40.433472       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] <==
	I0425 20:03:26.253815       1 server_linux.go:69] "Using iptables proxy"
	I0425 20:03:26.263390       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	I0425 20:03:26.311409       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 20:03:26.311539       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 20:03:26.311594       1 server_linux.go:165] "Using iptables Proxier"
	I0425 20:03:26.314655       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 20:03:26.314900       1 server.go:872] "Version info" version="v1.30.0"
	I0425 20:03:26.314960       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 20:03:26.316000       1 config.go:192] "Starting service config controller"
	I0425 20:03:26.316050       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 20:03:26.316082       1 config.go:101] "Starting endpoint slice config controller"
	I0425 20:03:26.316098       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 20:03:26.318289       1 config.go:319] "Starting node config controller"
	I0425 20:03:26.318335       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 20:03:26.417074       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 20:03:26.417271       1 shared_informer.go:320] Caches are synced for service config
	I0425 20:03:26.418760       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] <==
	I0425 20:03:22.784588       1 serving.go:380] Generated self-signed cert in-memory
	W0425 20:03:24.484484       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0425 20:03:24.484606       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 20:03:24.484618       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0425 20:03:24.484625       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0425 20:03:24.517612       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0425 20:03:24.517662       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 20:03:24.519752       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0425 20:03:24.520106       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0425 20:03:24.520228       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0425 20:03:24.520327       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0425 20:03:24.620667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 25 20:21:20 default-k8s-diff-port-142196 kubelet[948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:21:20 default-k8s-diff-port-142196 kubelet[948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:21:27 default-k8s-diff-port-142196 kubelet[948]: E0425 20:21:27.732934     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:21:38 default-k8s-diff-port-142196 kubelet[948]: E0425 20:21:38.735484     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:21:50 default-k8s-diff-port-142196 kubelet[948]: E0425 20:21:50.733225     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:22:05 default-k8s-diff-port-142196 kubelet[948]: E0425 20:22:05.734013     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:22:17 default-k8s-diff-port-142196 kubelet[948]: E0425 20:22:17.734562     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:22:20 default-k8s-diff-port-142196 kubelet[948]: E0425 20:22:20.757933     948 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:22:20 default-k8s-diff-port-142196 kubelet[948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:22:20 default-k8s-diff-port-142196 kubelet[948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:22:20 default-k8s-diff-port-142196 kubelet[948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:22:20 default-k8s-diff-port-142196 kubelet[948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:22:31 default-k8s-diff-port-142196 kubelet[948]: E0425 20:22:31.733668     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:22:42 default-k8s-diff-port-142196 kubelet[948]: E0425 20:22:42.733485     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:22:54 default-k8s-diff-port-142196 kubelet[948]: E0425 20:22:54.735383     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:23:09 default-k8s-diff-port-142196 kubelet[948]: E0425 20:23:09.732891     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:23:20 default-k8s-diff-port-142196 kubelet[948]: E0425 20:23:20.733773     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:23:20 default-k8s-diff-port-142196 kubelet[948]: E0425 20:23:20.754972     948 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:23:20 default-k8s-diff-port-142196 kubelet[948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:23:20 default-k8s-diff-port-142196 kubelet[948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:23:20 default-k8s-diff-port-142196 kubelet[948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:23:20 default-k8s-diff-port-142196 kubelet[948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:23:34 default-k8s-diff-port-142196 kubelet[948]: E0425 20:23:34.734316     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:23:48 default-k8s-diff-port-142196 kubelet[948]: E0425 20:23:48.735254     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	Apr 25 20:24:00 default-k8s-diff-port-142196 kubelet[948]: E0425 20:24:00.733711     948 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-cphk6" podUID="e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f"
	
	
	==> storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] <==
	I0425 20:03:57.108325       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0425 20:03:57.125297       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0425 20:03:57.125485       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0425 20:04:14.529603       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0425 20:04:14.533779       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"747fb1a6-d4a5-403e-811e-03c0478dbf31", APIVersion:"v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-142196_ff01e226-22c4-4e06-bfa5-18a0b24e1309 became leader
	I0425 20:04:14.534051       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-142196_ff01e226-22c4-4e06-bfa5-18a0b24e1309!
	I0425 20:04:14.636952       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-142196_ff01e226-22c4-4e06-bfa5-18a0b24e1309!
	
	
	==> storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] <==
	I0425 20:03:26.239180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0425 20:03:56.240952       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
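The kubelet entries in the capture above show metrics-server-569cc877fc-cphk6 stuck in ImagePullBackOff against fake.domain/registry.k8s.io/echoserver:1.4. That registry is deliberately bogus: the test enables the addon with --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain (the Audit log in the next post-mortem records the exact command for default-k8s-diff-port-142196), so the pull failure is the state the test sets up rather than an infrastructure fault. The periodic ip6tables "canary" errors are most likely just the guest kernel lacking an IPv6 nat table, which should not matter for this single-stack IPv4 cluster (kube-proxy reports "No iptables support for family IPv6" above). A hedged way to confirm the back-off reason by hand, assuming the profile is still running and that the addon's pods carry the k8s-app=metrics-server label (the label is not shown in the capture):

	kubectl --context default-k8s-diff-port-142196 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context default-k8s-diff-port-142196 -n kube-system describe pods -l k8s-app=metrics-server | grep -Ei 'back-off|failed to pull'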
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-142196 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-cphk6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-142196 describe pod metrics-server-569cc877fc-cphk6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-142196 describe pod metrics-server-569cc877fc-cphk6: exit status 1 (68.570451ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-cphk6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-142196 describe pod metrics-server-569cc877fc-cphk6: exit status 1
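The NotFound above is most likely a namespace mismatch rather than the pod disappearing: the preceding listing was cluster-wide (get po ... -A), but the describe was issued without -n, so it looked for the pod in the default namespace while metrics-server lives in kube-system. A minimal sketch of the same post-mortem query pointed at the right namespace, using the pod name captured above:

	kubectl --context default-k8s-diff-port-142196 -n kube-system describe pod metrics-server-569cc877fc-cphk6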
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (425.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (350.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-744552 -n no-preload-744552
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-25 20:23:26.427080124 +0000 UTC m=+6732.836014680
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-744552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-744552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.847µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-744552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
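Note that the describe at :291 came back in 1.847µs with "context deadline exceeded": the test's own deadline had already expired before kubectl could run, so it says nothing about the deployment itself. A hedged way to re-run by hand the two checks this test encodes (dashboard pods present, and dashboard-metrics-scraper carrying the overridden registry.k8s.io/echoserver:1.4 image), assuming the no-preload-744552 profile is still up:

	kubectl --context no-preload-744552 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-744552 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'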
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744552 -n no-preload-744552
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-744552 logs -n 25
E0425 20:23:27.583035   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-744552 logs -n 25: (1.36637864s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-120641 sudo find                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo crio                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-120641                                      | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113000 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:54 UTC |
	|         | disable-driver-mounts-113000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-512173            | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-744552             | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142196  | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210442        | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-512173                 | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-744552                  | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142196       | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:07 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210442             | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 20:22 UTC | 25 Apr 24 20:22 UTC |
	| start   | -p newest-cni-366100 --memory=2200 --alsologtostderr   | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:22 UTC | 25 Apr 24 20:23 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-366100             | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC | 25 Apr 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-366100                                   | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 20:22:22
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 20:22:22.477831   79301 out.go:291] Setting OutFile to fd 1 ...
	I0425 20:22:22.478075   79301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 20:22:22.478084   79301 out.go:304] Setting ErrFile to fd 2...
	I0425 20:22:22.478088   79301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 20:22:22.478362   79301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 20:22:22.478982   79301 out.go:298] Setting JSON to false
	I0425 20:22:22.480012   79301 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7488,"bootTime":1714069054,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 20:22:22.480082   79301 start.go:139] virtualization: kvm guest
	I0425 20:22:22.483562   79301 out.go:177] * [newest-cni-366100] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 20:22:22.485082   79301 notify.go:220] Checking for updates...
	I0425 20:22:22.485086   79301 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 20:22:22.486807   79301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 20:22:22.488273   79301 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:22:22.489674   79301 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 20:22:22.490982   79301 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 20:22:22.492197   79301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 20:22:22.493961   79301 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:22:22.494080   79301 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:22:22.494194   79301 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:22:22.494317   79301 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 20:22:22.531022   79301 out.go:177] * Using the kvm2 driver based on user configuration
	I0425 20:22:22.532450   79301 start.go:297] selected driver: kvm2
	I0425 20:22:22.532467   79301 start.go:901] validating driver "kvm2" against <nil>
	I0425 20:22:22.532477   79301 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 20:22:22.533115   79301 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 20:22:22.533178   79301 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 20:22:22.551183   79301 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 20:22:22.551244   79301 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0425 20:22:22.551273   79301 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0425 20:22:22.551497   79301 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0425 20:22:22.551562   79301 cni.go:84] Creating CNI manager for ""
	I0425 20:22:22.551577   79301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:22:22.551590   79301 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 20:22:22.551648   79301 start.go:340] cluster config:
	{Name:newest-cni-366100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-366100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:22:22.551736   79301 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 20:22:22.553526   79301 out.go:177] * Starting "newest-cni-366100" primary control-plane node in "newest-cni-366100" cluster
	I0425 20:22:22.554733   79301 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:22:22.554776   79301 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 20:22:22.554786   79301 cache.go:56] Caching tarball of preloaded images
	I0425 20:22:22.554871   79301 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 20:22:22.554881   79301 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 20:22:22.554986   79301 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/config.json ...
	I0425 20:22:22.555011   79301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/config.json: {Name:mk4069052896cf29dde945427d26ef90d6c394f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:22:22.555170   79301 start.go:360] acquireMachinesLock for newest-cni-366100: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 20:22:22.555216   79301 start.go:364] duration metric: took 26.877µs to acquireMachinesLock for "newest-cni-366100"
	I0425 20:22:22.555236   79301 start.go:93] Provisioning new machine with config: &{Name:newest-cni-366100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-366100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:22:22.555324   79301 start.go:125] createHost starting for "" (driver="kvm2")
	I0425 20:22:22.557050   79301 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0425 20:22:22.557201   79301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:22:22.557234   79301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:22:22.572090   79301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35649
	I0425 20:22:22.572573   79301 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:22:22.573160   79301 main.go:141] libmachine: Using API Version  1
	I0425 20:22:22.573180   79301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:22:22.573591   79301 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:22:22.573834   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetMachineName
	I0425 20:22:22.574030   79301 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:22:22.574272   79301 start.go:159] libmachine.API.Create for "newest-cni-366100" (driver="kvm2")
	I0425 20:22:22.574308   79301 client.go:168] LocalClient.Create starting
	I0425 20:22:22.574354   79301 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem
	I0425 20:22:22.574397   79301 main.go:141] libmachine: Decoding PEM data...
	I0425 20:22:22.574421   79301 main.go:141] libmachine: Parsing certificate...
	I0425 20:22:22.574500   79301 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem
	I0425 20:22:22.574526   79301 main.go:141] libmachine: Decoding PEM data...
	I0425 20:22:22.574541   79301 main.go:141] libmachine: Parsing certificate...
	I0425 20:22:22.574564   79301 main.go:141] libmachine: Running pre-create checks...
	I0425 20:22:22.574577   79301 main.go:141] libmachine: (newest-cni-366100) Calling .PreCreateCheck
	I0425 20:22:22.574930   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetConfigRaw
	I0425 20:22:22.575355   79301 main.go:141] libmachine: Creating machine...
	I0425 20:22:22.575375   79301 main.go:141] libmachine: (newest-cni-366100) Calling .Create
	I0425 20:22:22.575499   79301 main.go:141] libmachine: (newest-cni-366100) Creating KVM machine...
	I0425 20:22:22.576696   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found existing default KVM network
	I0425 20:22:22.578018   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:22.577886   79324 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:23:c7:12} reservation:<nil>}
	I0425 20:22:22.578881   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:22.578811   79324 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:91:72:9f} reservation:<nil>}
	I0425 20:22:22.580017   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:22.579942   79324 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a5000}
	I0425 20:22:22.580043   79301 main.go:141] libmachine: (newest-cni-366100) DBG | created network xml: 
	I0425 20:22:22.580054   79301 main.go:141] libmachine: (newest-cni-366100) DBG | <network>
	I0425 20:22:22.580067   79301 main.go:141] libmachine: (newest-cni-366100) DBG |   <name>mk-newest-cni-366100</name>
	I0425 20:22:22.580078   79301 main.go:141] libmachine: (newest-cni-366100) DBG |   <dns enable='no'/>
	I0425 20:22:22.580087   79301 main.go:141] libmachine: (newest-cni-366100) DBG |   
	I0425 20:22:22.580098   79301 main.go:141] libmachine: (newest-cni-366100) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0425 20:22:22.580113   79301 main.go:141] libmachine: (newest-cni-366100) DBG |     <dhcp>
	I0425 20:22:22.580127   79301 main.go:141] libmachine: (newest-cni-366100) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0425 20:22:22.580140   79301 main.go:141] libmachine: (newest-cni-366100) DBG |     </dhcp>
	I0425 20:22:22.580145   79301 main.go:141] libmachine: (newest-cni-366100) DBG |   </ip>
	I0425 20:22:22.580150   79301 main.go:141] libmachine: (newest-cni-366100) DBG |   
	I0425 20:22:22.580155   79301 main.go:141] libmachine: (newest-cni-366100) DBG | </network>
	I0425 20:22:22.580159   79301 main.go:141] libmachine: (newest-cni-366100) DBG | 
	I0425 20:22:22.585559   79301 main.go:141] libmachine: (newest-cni-366100) DBG | trying to create private KVM network mk-newest-cni-366100 192.168.61.0/24...
	I0425 20:22:22.659361   79301 main.go:141] libmachine: (newest-cni-366100) DBG | private KVM network mk-newest-cni-366100 192.168.61.0/24 created
	I0425 20:22:22.659393   79301 main.go:141] libmachine: (newest-cni-366100) Setting up store path in /home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100 ...
	I0425 20:22:22.659412   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:22.659333   79324 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 20:22:22.659448   79301 main.go:141] libmachine: (newest-cni-366100) Building disk image from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 20:22:22.659473   79301 main.go:141] libmachine: (newest-cni-366100) Downloading /home/jenkins/minikube-integration/18757-6355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0425 20:22:22.891009   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:22.890873   79324 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa...
	I0425 20:22:22.986250   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:22.986113   79324 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/newest-cni-366100.rawdisk...
	I0425 20:22:22.986298   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Writing magic tar header
	I0425 20:22:22.986317   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Writing SSH key tar header
	I0425 20:22:22.986331   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:22.986251   79324 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100 ...
	I0425 20:22:22.986463   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100
	I0425 20:22:22.986486   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube/machines
	I0425 20:22:22.986494   79301 main.go:141] libmachine: (newest-cni-366100) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100 (perms=drwx------)
	I0425 20:22:22.986507   79301 main.go:141] libmachine: (newest-cni-366100) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube/machines (perms=drwxr-xr-x)
	I0425 20:22:22.986522   79301 main.go:141] libmachine: (newest-cni-366100) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355/.minikube (perms=drwxr-xr-x)
	I0425 20:22:22.986534   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 20:22:22.986551   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18757-6355
	I0425 20:22:22.986564   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0425 20:22:22.986575   79301 main.go:141] libmachine: (newest-cni-366100) Setting executable bit set on /home/jenkins/minikube-integration/18757-6355 (perms=drwxrwxr-x)
	I0425 20:22:22.986590   79301 main.go:141] libmachine: (newest-cni-366100) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0425 20:22:22.986604   79301 main.go:141] libmachine: (newest-cni-366100) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0425 20:22:22.986618   79301 main.go:141] libmachine: (newest-cni-366100) Creating domain...
	I0425 20:22:22.986632   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Checking permissions on dir: /home/jenkins
	I0425 20:22:22.986645   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Checking permissions on dir: /home
	I0425 20:22:22.986656   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Skipping /home - not owner
	I0425 20:22:22.987770   79301 main.go:141] libmachine: (newest-cni-366100) define libvirt domain using xml: 
	I0425 20:22:22.987799   79301 main.go:141] libmachine: (newest-cni-366100) <domain type='kvm'>
	I0425 20:22:22.987811   79301 main.go:141] libmachine: (newest-cni-366100)   <name>newest-cni-366100</name>
	I0425 20:22:22.987822   79301 main.go:141] libmachine: (newest-cni-366100)   <memory unit='MiB'>2200</memory>
	I0425 20:22:22.987827   79301 main.go:141] libmachine: (newest-cni-366100)   <vcpu>2</vcpu>
	I0425 20:22:22.987831   79301 main.go:141] libmachine: (newest-cni-366100)   <features>
	I0425 20:22:22.987836   79301 main.go:141] libmachine: (newest-cni-366100)     <acpi/>
	I0425 20:22:22.987842   79301 main.go:141] libmachine: (newest-cni-366100)     <apic/>
	I0425 20:22:22.987847   79301 main.go:141] libmachine: (newest-cni-366100)     <pae/>
	I0425 20:22:22.987851   79301 main.go:141] libmachine: (newest-cni-366100)     
	I0425 20:22:22.987856   79301 main.go:141] libmachine: (newest-cni-366100)   </features>
	I0425 20:22:22.987866   79301 main.go:141] libmachine: (newest-cni-366100)   <cpu mode='host-passthrough'>
	I0425 20:22:22.987874   79301 main.go:141] libmachine: (newest-cni-366100)   
	I0425 20:22:22.987884   79301 main.go:141] libmachine: (newest-cni-366100)   </cpu>
	I0425 20:22:22.987892   79301 main.go:141] libmachine: (newest-cni-366100)   <os>
	I0425 20:22:22.987903   79301 main.go:141] libmachine: (newest-cni-366100)     <type>hvm</type>
	I0425 20:22:22.987917   79301 main.go:141] libmachine: (newest-cni-366100)     <boot dev='cdrom'/>
	I0425 20:22:22.987925   79301 main.go:141] libmachine: (newest-cni-366100)     <boot dev='hd'/>
	I0425 20:22:22.987933   79301 main.go:141] libmachine: (newest-cni-366100)     <bootmenu enable='no'/>
	I0425 20:22:22.987941   79301 main.go:141] libmachine: (newest-cni-366100)   </os>
	I0425 20:22:22.987947   79301 main.go:141] libmachine: (newest-cni-366100)   <devices>
	I0425 20:22:22.987956   79301 main.go:141] libmachine: (newest-cni-366100)     <disk type='file' device='cdrom'>
	I0425 20:22:22.987972   79301 main.go:141] libmachine: (newest-cni-366100)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/boot2docker.iso'/>
	I0425 20:22:22.987985   79301 main.go:141] libmachine: (newest-cni-366100)       <target dev='hdc' bus='scsi'/>
	I0425 20:22:22.988000   79301 main.go:141] libmachine: (newest-cni-366100)       <readonly/>
	I0425 20:22:22.988011   79301 main.go:141] libmachine: (newest-cni-366100)     </disk>
	I0425 20:22:22.988023   79301 main.go:141] libmachine: (newest-cni-366100)     <disk type='file' device='disk'>
	I0425 20:22:22.988037   79301 main.go:141] libmachine: (newest-cni-366100)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0425 20:22:22.988049   79301 main.go:141] libmachine: (newest-cni-366100)       <source file='/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/newest-cni-366100.rawdisk'/>
	I0425 20:22:22.988059   79301 main.go:141] libmachine: (newest-cni-366100)       <target dev='hda' bus='virtio'/>
	I0425 20:22:22.988066   79301 main.go:141] libmachine: (newest-cni-366100)     </disk>
	I0425 20:22:22.988101   79301 main.go:141] libmachine: (newest-cni-366100)     <interface type='network'>
	I0425 20:22:22.988152   79301 main.go:141] libmachine: (newest-cni-366100)       <source network='mk-newest-cni-366100'/>
	I0425 20:22:22.988170   79301 main.go:141] libmachine: (newest-cni-366100)       <model type='virtio'/>
	I0425 20:22:22.988180   79301 main.go:141] libmachine: (newest-cni-366100)     </interface>
	I0425 20:22:22.988190   79301 main.go:141] libmachine: (newest-cni-366100)     <interface type='network'>
	I0425 20:22:22.988211   79301 main.go:141] libmachine: (newest-cni-366100)       <source network='default'/>
	I0425 20:22:22.988226   79301 main.go:141] libmachine: (newest-cni-366100)       <model type='virtio'/>
	I0425 20:22:22.988239   79301 main.go:141] libmachine: (newest-cni-366100)     </interface>
	I0425 20:22:22.988249   79301 main.go:141] libmachine: (newest-cni-366100)     <serial type='pty'>
	I0425 20:22:22.988256   79301 main.go:141] libmachine: (newest-cni-366100)       <target port='0'/>
	I0425 20:22:22.988264   79301 main.go:141] libmachine: (newest-cni-366100)     </serial>
	I0425 20:22:22.988270   79301 main.go:141] libmachine: (newest-cni-366100)     <console type='pty'>
	I0425 20:22:22.988277   79301 main.go:141] libmachine: (newest-cni-366100)       <target type='serial' port='0'/>
	I0425 20:22:22.988293   79301 main.go:141] libmachine: (newest-cni-366100)     </console>
	I0425 20:22:22.988308   79301 main.go:141] libmachine: (newest-cni-366100)     <rng model='virtio'>
	I0425 20:22:22.988322   79301 main.go:141] libmachine: (newest-cni-366100)       <backend model='random'>/dev/random</backend>
	I0425 20:22:22.988332   79301 main.go:141] libmachine: (newest-cni-366100)     </rng>
	I0425 20:22:22.988344   79301 main.go:141] libmachine: (newest-cni-366100)     
	I0425 20:22:22.988363   79301 main.go:141] libmachine: (newest-cni-366100)     
	I0425 20:22:22.988372   79301 main.go:141] libmachine: (newest-cni-366100)   </devices>
	I0425 20:22:22.988382   79301 main.go:141] libmachine: (newest-cni-366100) </domain>
	I0425 20:22:22.988392   79301 main.go:141] libmachine: (newest-cni-366100) 
	I0425 20:22:22.992757   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:4a:af:63 in network default
	I0425 20:22:22.994722   79301 main.go:141] libmachine: (newest-cni-366100) Ensuring networks are active...
	I0425 20:22:22.994739   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:22.995425   79301 main.go:141] libmachine: (newest-cni-366100) Ensuring network default is active
	I0425 20:22:22.995772   79301 main.go:141] libmachine: (newest-cni-366100) Ensuring network mk-newest-cni-366100 is active
	I0425 20:22:22.996414   79301 main.go:141] libmachine: (newest-cni-366100) Getting domain xml...
	I0425 20:22:22.997344   79301 main.go:141] libmachine: (newest-cni-366100) Creating domain...
	I0425 20:22:24.272697   79301 main.go:141] libmachine: (newest-cni-366100) Waiting to get IP...
	I0425 20:22:24.273659   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:24.274149   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:24.274183   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:24.274121   79324 retry.go:31] will retry after 280.107481ms: waiting for machine to come up
	I0425 20:22:24.556503   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:24.557206   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:24.557242   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:24.557136   79324 retry.go:31] will retry after 286.400005ms: waiting for machine to come up
	I0425 20:22:24.845613   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:24.846172   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:24.846198   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:24.846132   79324 retry.go:31] will retry after 356.162312ms: waiting for machine to come up
	I0425 20:22:25.204464   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:25.204988   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:25.205022   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:25.204926   79324 retry.go:31] will retry after 443.63756ms: waiting for machine to come up
	I0425 20:22:25.650525   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:25.651078   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:25.651104   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:25.651033   79324 retry.go:31] will retry after 646.431665ms: waiting for machine to come up
	I0425 20:22:26.298838   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:26.299340   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:26.299372   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:26.299292   79324 retry.go:31] will retry after 894.481694ms: waiting for machine to come up
	I0425 20:22:27.195287   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:27.195814   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:27.195847   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:27.195761   79324 retry.go:31] will retry after 1.030703094s: waiting for machine to come up
	I0425 20:22:28.228473   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:28.228921   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:28.228946   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:28.228877   79324 retry.go:31] will retry after 1.310048294s: waiting for machine to come up
	I0425 20:22:29.541350   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:29.541702   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:29.541729   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:29.541658   79324 retry.go:31] will retry after 1.419108793s: waiting for machine to come up
	I0425 20:22:30.962247   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:30.962862   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:30.962919   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:30.962796   79324 retry.go:31] will retry after 1.833863727s: waiting for machine to come up
	I0425 20:22:32.798233   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:32.798744   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:32.798778   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:32.798701   79324 retry.go:31] will retry after 2.658275798s: waiting for machine to come up
	I0425 20:22:35.459263   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:35.459763   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:35.459790   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:35.459717   79324 retry.go:31] will retry after 2.674514724s: waiting for machine to come up
	I0425 20:22:38.135774   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:38.136279   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:38.136301   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:38.136245   79324 retry.go:31] will retry after 3.93705161s: waiting for machine to come up
	I0425 20:22:42.078502   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:42.078920   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:22:42.078941   79301 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:22:42.078890   79324 retry.go:31] will retry after 4.709287804s: waiting for machine to come up
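The string of "will retry after …" messages above is a poll of the libvirt DHCP leases with a growing delay between attempts until the new domain reports an address. A minimal sketch of that wait-with-backoff pattern (illustrative only, not minikube's retry.go; lookupIP is a hypothetical stand-in for the lease query):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease query; here it always fails.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP, sleeping a little longer after each failure.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay = delay * 3 / 2 // grow the wait between attempts
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}

func main() {
	fmt.Println(waitForIP(2 * time.Second))
}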
	I0425 20:22:46.791471   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:46.792014   79301 main.go:141] libmachine: (newest-cni-366100) Found IP for machine: 192.168.61.209
	I0425 20:22:46.792045   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has current primary IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:46.792055   79301 main.go:141] libmachine: (newest-cni-366100) Reserving static IP address...
	I0425 20:22:46.792475   79301 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find host DHCP lease matching {name: "newest-cni-366100", mac: "52:54:00:a7:4f:45", ip: "192.168.61.209"} in network mk-newest-cni-366100
	I0425 20:22:46.873557   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Getting to WaitForSSH function...
	I0425 20:22:46.873612   79301 main.go:141] libmachine: (newest-cni-366100) Reserved static IP address: 192.168.61.209
	I0425 20:22:46.873626   79301 main.go:141] libmachine: (newest-cni-366100) Waiting for SSH to be available...
	I0425 20:22:46.877191   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:46.877587   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:46.877617   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:46.877777   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Using SSH client type: external
	I0425 20:22:46.877805   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa (-rw-------)
	I0425 20:22:46.877835   79301 main.go:141] libmachine: (newest-cni-366100) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:22:46.877846   79301 main.go:141] libmachine: (newest-cni-366100) DBG | About to run SSH command:
	I0425 20:22:46.877867   79301 main.go:141] libmachine: (newest-cni-366100) DBG | exit 0
	I0425 20:22:47.006327   79301 main.go:141] libmachine: (newest-cni-366100) DBG | SSH cmd err, output: <nil>: 
	I0425 20:22:47.006633   79301 main.go:141] libmachine: (newest-cni-366100) KVM machine creation complete!
	I0425 20:22:47.006941   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetConfigRaw
	I0425 20:22:47.007521   79301 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:22:47.007735   79301 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:22:47.007910   79301 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0425 20:22:47.007925   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetState
	I0425 20:22:47.009184   79301 main.go:141] libmachine: Detecting operating system of created instance...
	I0425 20:22:47.009200   79301 main.go:141] libmachine: Waiting for SSH to be available...
	I0425 20:22:47.009208   79301 main.go:141] libmachine: Getting to WaitForSSH function...
	I0425 20:22:47.009217   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:22:47.011433   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.011780   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:47.011808   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.011906   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:22:47.012107   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:47.012286   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:47.012475   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:22:47.012677   79301 main.go:141] libmachine: Using SSH client type: native
	I0425 20:22:47.012841   79301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0425 20:22:47.012852   79301 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0425 20:22:47.125744   79301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
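At this point the provisioner switches from the external /usr/bin/ssh client used for the first probe to a "native" in-process SSH client and re-runs exit 0 to confirm connectivity. A rough equivalent using golang.org/x/crypto/ssh; the user, address and key path simply echo values from the log, and the host-key check is relaxed to match the StrictHostKeyChecking=no option shown earlier:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path; the log uses the per-machine id_rsa under .minikube.
	key, err := os.ReadFile("/path/to/machines/newest-cni-366100/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no
	}
	client, err := ssh.Dial("tcp", "192.168.61.209:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("exit 0")
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}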
	I0425 20:22:47.125769   79301 main.go:141] libmachine: Detecting the provisioner...
	I0425 20:22:47.125777   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:22:47.128629   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.129037   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:47.129066   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.129258   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:22:47.129485   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:47.129666   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:47.129820   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:22:47.129992   79301 main.go:141] libmachine: Using SSH client type: native
	I0425 20:22:47.130244   79301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0425 20:22:47.130259   79301 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0425 20:22:47.251742   79301 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0425 20:22:47.251849   79301 main.go:141] libmachine: found compatible host: buildroot
	I0425 20:22:47.251870   79301 main.go:141] libmachine: Provisioning with buildroot...
	I0425 20:22:47.251882   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetMachineName
	I0425 20:22:47.252149   79301 buildroot.go:166] provisioning hostname "newest-cni-366100"
	I0425 20:22:47.252175   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetMachineName
	I0425 20:22:47.252411   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:22:47.255200   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.255499   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:47.255531   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.255645   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:22:47.255892   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:47.256074   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:47.256205   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:22:47.256377   79301 main.go:141] libmachine: Using SSH client type: native
	I0425 20:22:47.256580   79301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0425 20:22:47.256594   79301 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-366100 && echo "newest-cni-366100" | sudo tee /etc/hostname
	I0425 20:22:47.389066   79301 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-366100
	
	I0425 20:22:47.389098   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:22:47.392070   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.392386   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:47.392422   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.392576   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:22:47.392783   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:47.392963   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:47.393099   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:22:47.393259   79301 main.go:141] libmachine: Using SSH client type: native
	I0425 20:22:47.393444   79301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0425 20:22:47.393460   79301 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-366100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-366100/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-366100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:22:47.521628   79301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:22:47.521658   79301 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:22:47.521689   79301 buildroot.go:174] setting up certificates
	I0425 20:22:47.521707   79301 provision.go:84] configureAuth start
	I0425 20:22:47.521729   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetMachineName
	I0425 20:22:47.522033   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetIP
	I0425 20:22:47.524801   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.525138   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:47.525168   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.525351   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:22:47.527735   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.528094   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:47.528124   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.528307   79301 provision.go:143] copyHostCerts
	I0425 20:22:47.528363   79301 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:22:47.528377   79301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:22:47.528468   79301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:22:47.528607   79301 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:22:47.528621   79301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:22:47.528659   79301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:22:47.528751   79301 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:22:47.528762   79301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:22:47.528795   79301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:22:47.528884   79301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.newest-cni-366100 san=[127.0.0.1 192.168.61.209 localhost minikube newest-cni-366100]
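configureAuth generates a server certificate signed by the minikube CA whose subject alternative names are the list printed above (127.0.0.1, the machine IP, localhost, minikube and the profile name). A self-contained sketch of issuing such a certificate with Go's crypto/x509; the throwaway CA and 24-hour lifetime are assumptions for the example, whereas minikube reuses ca.pem/ca-key.pem from the .minikube directory:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; minikube instead loads its existing CA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-366100"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-366100"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.209")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}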
	I0425 20:22:47.582668   79301 provision.go:177] copyRemoteCerts
	I0425 20:22:47.582719   79301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:22:47.582744   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:22:47.585680   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.586015   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:47.586046   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.586172   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:22:47.586377   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:47.586523   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:22:47.586679   79301 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa Username:docker}
	I0425 20:22:47.678308   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:22:47.712338   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:22:47.743589   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 20:22:47.772459   79301 provision.go:87] duration metric: took 250.732038ms to configureAuth
	I0425 20:22:47.772487   79301 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:22:47.772699   79301 config.go:182] Loaded profile config "newest-cni-366100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:22:47.772784   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:22:47.775849   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.776258   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:47.776289   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:47.776476   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:22:47.776684   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:47.776878   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:47.777038   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:22:47.777215   79301 main.go:141] libmachine: Using SSH client type: native
	I0425 20:22:47.777421   79301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0425 20:22:47.777452   79301 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:22:48.088295   79301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:22:48.088332   79301 main.go:141] libmachine: Checking connection to Docker...
	I0425 20:22:48.088342   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetURL
	I0425 20:22:48.089829   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Using libvirt version 6000000
	I0425 20:22:48.092372   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.092799   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:48.092826   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.092983   79301 main.go:141] libmachine: Docker is up and running!
	I0425 20:22:48.093001   79301 main.go:141] libmachine: Reticulating splines...
	I0425 20:22:48.093007   79301 client.go:171] duration metric: took 25.518689355s to LocalClient.Create
	I0425 20:22:48.093044   79301 start.go:167] duration metric: took 25.51877184s to libmachine.API.Create "newest-cni-366100"
	I0425 20:22:48.093057   79301 start.go:293] postStartSetup for "newest-cni-366100" (driver="kvm2")
	I0425 20:22:48.093071   79301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:22:48.093093   79301 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:22:48.093363   79301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:22:48.093393   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:22:48.095744   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.096132   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:48.096171   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.096320   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:22:48.096506   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:48.096682   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:22:48.096837   79301 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa Username:docker}
	I0425 20:22:48.186940   79301 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:22:48.192371   79301 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:22:48.192399   79301 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:22:48.192460   79301 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:22:48.192544   79301 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:22:48.192647   79301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:22:48.204446   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:22:48.235030   79301 start.go:296] duration metric: took 141.960647ms for postStartSetup
	I0425 20:22:48.235070   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetConfigRaw
	I0425 20:22:48.235597   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetIP
	I0425 20:22:48.238303   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.238637   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:48.238665   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.238900   79301 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/config.json ...
	I0425 20:22:48.239087   79301 start.go:128] duration metric: took 25.683752057s to createHost
	I0425 20:22:48.239110   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:22:48.241281   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.241579   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:48.241605   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.241749   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:22:48.241938   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:48.242070   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:48.242235   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:22:48.242394   79301 main.go:141] libmachine: Using SSH client type: native
	I0425 20:22:48.242539   79301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.209 22 <nil> <nil>}
	I0425 20:22:48.242548   79301 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:22:48.359647   79301 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714076568.340981110
	
	I0425 20:22:48.359683   79301 fix.go:216] guest clock: 1714076568.340981110
	I0425 20:22:48.359697   79301 fix.go:229] Guest: 2024-04-25 20:22:48.34098111 +0000 UTC Remote: 2024-04-25 20:22:48.239100217 +0000 UTC m=+25.810968100 (delta=101.880893ms)
	I0425 20:22:48.359733   79301 fix.go:200] guest clock delta is within tolerance: 101.880893ms
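fix.go reads the guest clock over SSH (the date +%s.%N command above), compares it with the host clock and only forces a resync when the skew exceeds a tolerance; here the ~102ms delta passes. A toy version of that comparison, using an assumed 2-second threshold rather than minikube's actual value:

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	guest := time.Unix(1714076568, 340981110) // parsed from the guest's "date" output
	host := time.Unix(1714076568, 239100217)  // host clock at the time of the probe
	delta := guest.Sub(host)
	tolerance := 2 * time.Second // illustrative threshold, not minikube's exact value
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds %s, would resync the guest clock\n", delta, tolerance)
	}
}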
	I0425 20:22:48.359747   79301 start.go:83] releasing machines lock for "newest-cni-366100", held for 25.804519718s
	I0425 20:22:48.359778   79301 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:22:48.360038   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetIP
	I0425 20:22:48.362815   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.363174   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:48.363208   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.363372   79301 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:22:48.363831   79301 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:22:48.364050   79301 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:22:48.364196   79301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:22:48.364235   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:22:48.364295   79301 ssh_runner.go:195] Run: cat /version.json
	I0425 20:22:48.364318   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:22:48.366969   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.367263   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.367306   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:48.367327   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.367535   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:22:48.367711   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:48.367788   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:48.367813   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:48.367909   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:22:48.367977   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:22:48.368112   79301 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa Username:docker}
	I0425 20:22:48.368510   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:22:48.368668   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:22:48.368823   79301 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa Username:docker}
	I0425 20:22:48.452429   79301 ssh_runner.go:195] Run: systemctl --version
	I0425 20:22:48.481329   79301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:22:48.648106   79301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:22:48.655170   79301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:22:48.655239   79301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:22:48.673100   79301 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:22:48.673121   79301 start.go:494] detecting cgroup driver to use...
	I0425 20:22:48.673185   79301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:22:48.694974   79301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:22:48.711517   79301 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:22:48.711583   79301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:22:48.727263   79301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:22:48.743515   79301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:22:48.883995   79301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:22:49.042896   79301 docker.go:233] disabling docker service ...
	I0425 20:22:49.042983   79301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:22:49.059714   79301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:22:49.075654   79301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:22:49.226513   79301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:22:49.354827   79301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:22:49.369169   79301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:22:49.391367   79301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:22:49.391438   79301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:22:49.403180   79301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:22:49.403242   79301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:22:49.414613   79301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:22:49.427976   79301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:22:49.440786   79301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:22:49.453089   79301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:22:49.465452   79301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:22:49.489547   79301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:22:49.501329   79301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:22:49.512148   79301 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:22:49.512222   79301 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:22:49.526695   79301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:22:49.538397   79301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:22:49.657547   79301 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:22:49.806932   79301 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:22:49.807004   79301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:22:49.812127   79301 start.go:562] Will wait 60s for crictl version
	I0425 20:22:49.812184   79301 ssh_runner.go:195] Run: which crictl
	I0425 20:22:49.816494   79301 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:22:49.861186   79301 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:22:49.861291   79301 ssh_runner.go:195] Run: crio --version
	I0425 20:22:49.895600   79301 ssh_runner.go:195] Run: crio --version
	I0425 20:22:49.932932   79301 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:22:49.934469   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetIP
	I0425 20:22:49.937385   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:49.937769   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:22:49.937797   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:22:49.937999   79301 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 20:22:49.942835   79301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
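The grep-plus-rewrite pair above makes the host.minikube.internal entry in /etc/hosts idempotent: any existing line for the name is dropped and a fresh "ip<TAB>name" line is appended. The same update sketched in Go, writing to a scratch file path so it can be tried safely:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" and appends a
// fresh "ip\tname" entry, mirroring the grep-and-rewrite pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Scratch file instead of /etc/hosts for the example.
	if err := ensureHostsEntry("hosts.test", "192.168.61.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}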
	I0425 20:22:49.959389   79301 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0425 20:22:49.960866   79301 kubeadm.go:877] updating cluster {Name:newest-cni-366100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-366100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:22:49.960997   79301 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:22:49.961085   79301 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:22:49.997841   79301 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:22:49.997958   79301 ssh_runner.go:195] Run: which lz4
	I0425 20:22:50.002727   79301 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0425 20:22:50.007641   79301 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:22:50.007674   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:22:51.734808   79301 crio.go:462] duration metric: took 1.732112376s to copy over tarball
	I0425 20:22:51.734879   79301 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:22:54.371486   79301 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.636578852s)
	I0425 20:22:54.371518   79301 crio.go:469] duration metric: took 2.63667934s to extract the tarball
	I0425 20:22:54.371528   79301 ssh_runner.go:146] rm: /preloaded.tar.lz4
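Because no preloaded images were found in the CRI-O image store, the preload tarball (394544937 bytes) is copied to the guest, unpacked into /var with tar's lz4 filter, then removed. A minimal sketch of invoking that same extraction command from Go; the paths mirror the log and are placeholders:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Equivalent of the logged "sudo tar ... -I lz4 -C /var -xf /preloaded.tar.lz4".
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}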
	I0425 20:22:54.413559   79301 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:22:54.466910   79301 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:22:54.466931   79301 cache_images.go:84] Images are preloaded, skipping loading
	I0425 20:22:54.466944   79301 kubeadm.go:928] updating node { 192.168.61.209 8443 v1.30.0 crio true true} ...
	I0425 20:22:54.467052   79301 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-366100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:newest-cni-366100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
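The kubelet systemd drop-in above is rendered from the node's runtime, Kubernetes version, feature gates, hostname and IP, and later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below). A rough text/template sketch of that rendering; the template and field names are illustrative, not minikube's actual code:

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates={{.FeatureGates}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the node described in the log above.
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":           "crio",
		"KubernetesVersion": "v1.30.0",
		"FeatureGates":      "ServerSideApply=true",
		"NodeName":          "newest-cni-366100",
		"NodeIP":            "192.168.61.209",
	})
}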
	I0425 20:22:54.467119   79301 ssh_runner.go:195] Run: crio config
	I0425 20:22:54.523765   79301 cni.go:84] Creating CNI manager for ""
	I0425 20:22:54.523801   79301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:22:54.523822   79301 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0425 20:22:54.523851   79301 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.209 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-366100 NodeName:newest-cni-366100 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:22:54.524018   79301 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-366100"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:22:54.524083   79301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:22:54.538259   79301 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:22:54.538336   79301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:22:54.551486   79301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0425 20:22:54.571864   79301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:22:54.591988   79301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0425 20:22:54.613140   79301 ssh_runner.go:195] Run: grep 192.168.61.209	control-plane.minikube.internal$ /etc/hosts
	I0425 20:22:54.617787   79301 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:22:54.633361   79301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:22:54.783211   79301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:22:54.810242   79301 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100 for IP: 192.168.61.209
	I0425 20:22:54.810267   79301 certs.go:194] generating shared ca certs ...
	I0425 20:22:54.810286   79301 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:22:54.810470   79301 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:22:54.810525   79301 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:22:54.810539   79301 certs.go:256] generating profile certs ...
	I0425 20:22:54.810610   79301 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/client.key
	I0425 20:22:54.810628   79301 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/client.crt with IP's: []
	I0425 20:22:54.982899   79301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/client.crt ...
	I0425 20:22:54.982928   79301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/client.crt: {Name:mka131c6e80916058e89c48866fb41b9261e9316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:22:54.983119   79301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/client.key ...
	I0425 20:22:54.983132   79301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/client.key: {Name:mkc3934ef31692e1c7c0d809eaba6c81af569844 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:22:54.983234   79301 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.key.e4274e3e
	I0425 20:22:54.983251   79301 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.crt.e4274e3e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.209]
	I0425 20:22:55.105030   79301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.crt.e4274e3e ...
	I0425 20:22:55.105061   79301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.crt.e4274e3e: {Name:mk31f2d94c50df338f88b4b80f9f3dc9a833249d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:22:55.105215   79301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.key.e4274e3e ...
	I0425 20:22:55.105228   79301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.key.e4274e3e: {Name:mkd92ed5d9964e1dc785987f78c05d652d49407b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:22:55.105294   79301 certs.go:381] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.crt.e4274e3e -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.crt
	I0425 20:22:55.105375   79301 certs.go:385] copying /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.key.e4274e3e -> /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.key
	I0425 20:22:55.105442   79301 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/proxy-client.key
	I0425 20:22:55.105468   79301 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/proxy-client.crt with IP's: []
	I0425 20:22:55.396077   79301 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/proxy-client.crt ...
	I0425 20:22:55.396106   79301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/proxy-client.crt: {Name:mk8331860de5df59c252969902153bb0d71c8a64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:22:55.396262   79301 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/proxy-client.key ...
	I0425 20:22:55.396277   79301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/proxy-client.key: {Name:mk1f0684d1c764b5996a9b9e56fc98438ea336ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:22:55.396464   79301 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:22:55.396515   79301 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:22:55.396529   79301 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:22:55.396560   79301 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:22:55.396596   79301 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:22:55.396633   79301 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:22:55.396690   79301 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:22:55.397360   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:22:55.427825   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:22:55.458030   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:22:55.489565   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:22:55.520352   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 20:22:55.552002   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:22:55.582137   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:22:55.622167   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:22:55.655857   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:22:55.691055   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:22:55.728317   79301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:22:55.758687   79301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:22:55.779122   79301 ssh_runner.go:195] Run: openssl version
	I0425 20:22:55.786493   79301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:22:55.799894   79301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:22:55.805118   79301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:22:55.805168   79301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:22:55.811741   79301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:22:55.824194   79301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:22:55.836573   79301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:22:55.842008   79301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:22:55.842051   79301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:22:55.848586   79301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:22:55.861447   79301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:22:55.875037   79301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:22:55.880831   79301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:22:55.880884   79301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:22:55.887812   79301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
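The repeated openssl/ln steps above install each CA into the guest's system trust store: the certificate's subject hash becomes the symlink name under /etc/ssl/certs. A minimal sketch of the same idea, using the minikubeCA.pem path seen in this log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${h}.0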
	I0425 20:22:55.900816   79301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:22:55.905975   79301 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0425 20:22:55.906032   79301 kubeadm.go:391] StartCluster: {Name:newest-cni-366100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:newest-cni-366100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:22:55.906165   79301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:22:55.906239   79301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:22:55.954590   79301 cri.go:89] found id: ""
	I0425 20:22:55.954652   79301 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0425 20:22:55.966922   79301 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:22:55.978258   79301 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:22:55.989916   79301 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:22:55.989939   79301 kubeadm.go:156] found existing configuration files:
	
	I0425 20:22:55.989997   79301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:22:56.001776   79301 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:22:56.001852   79301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:22:56.013575   79301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:22:56.024462   79301 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:22:56.024516   79301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:22:56.035527   79301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:22:56.045942   79301 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:22:56.046010   79301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:22:56.057340   79301 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:22:56.068822   79301 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:22:56.068876   79301 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:22:56.081326   79301 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:22:56.196982   79301 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 20:22:56.197110   79301 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:22:56.356489   79301 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:22:56.356638   79301 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:22:56.356774   79301 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0425 20:22:56.658945   79301 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:22:56.925054   79301 out.go:204]   - Generating certificates and keys ...
	I0425 20:22:56.925178   79301 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:22:56.925327   79301 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:22:57.289637   79301 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0425 20:22:57.499498   79301 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0425 20:22:57.787941   79301 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0425 20:22:57.951650   79301 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0425 20:22:58.118983   79301 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0425 20:22:58.119173   79301 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-366100] and IPs [192.168.61.209 127.0.0.1 ::1]
	I0425 20:22:58.215972   79301 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0425 20:22:58.216468   79301 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-366100] and IPs [192.168.61.209 127.0.0.1 ::1]
	I0425 20:22:58.482395   79301 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0425 20:22:58.766076   79301 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0425 20:22:58.895306   79301 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0425 20:22:58.895397   79301 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:22:59.075938   79301 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:22:59.390719   79301 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 20:22:59.607736   79301 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:22:59.735486   79301 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:23:00.013209   79301 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:23:00.013781   79301 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:23:00.016695   79301 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:23:00.019684   79301 out.go:204]   - Booting up control plane ...
	I0425 20:23:00.019807   79301 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:23:00.019905   79301 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:23:00.020007   79301 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:23:00.039787   79301 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:23:00.040841   79301 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:23:00.040890   79301 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:23:00.196559   79301 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 20:23:00.196664   79301 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 20:23:01.204966   79301 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.003374646s
	I0425 20:23:01.205074   79301 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 20:23:06.202074   79301 kubeadm.go:309] [api-check] The API server is healthy after 5.002263698s
	I0425 20:23:06.218456   79301 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 20:23:06.239067   79301 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 20:23:06.280370   79301 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 20:23:06.280711   79301 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-366100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 20:23:06.298260   79301 kubeadm.go:309] [bootstrap-token] Using token: 0gh46k.jpo3raxk2hiihtwf
	I0425 20:23:06.299761   79301 out.go:204]   - Configuring RBAC rules ...
	I0425 20:23:06.299905   79301 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 20:23:06.307596   79301 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 20:23:06.316348   79301 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 20:23:06.324168   79301 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 20:23:06.329704   79301 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 20:23:06.337484   79301 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 20:23:06.611275   79301 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 20:23:07.063020   79301 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 20:23:07.608391   79301 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 20:23:07.609532   79301 kubeadm.go:309] 
	I0425 20:23:07.609641   79301 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 20:23:07.609664   79301 kubeadm.go:309] 
	I0425 20:23:07.609816   79301 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 20:23:07.609835   79301 kubeadm.go:309] 
	I0425 20:23:07.609869   79301 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 20:23:07.609961   79301 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 20:23:07.610035   79301 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 20:23:07.610045   79301 kubeadm.go:309] 
	I0425 20:23:07.610113   79301 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 20:23:07.610124   79301 kubeadm.go:309] 
	I0425 20:23:07.610201   79301 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 20:23:07.610221   79301 kubeadm.go:309] 
	I0425 20:23:07.610291   79301 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 20:23:07.610414   79301 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 20:23:07.610534   79301 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 20:23:07.610557   79301 kubeadm.go:309] 
	I0425 20:23:07.610654   79301 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 20:23:07.610778   79301 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 20:23:07.610795   79301 kubeadm.go:309] 
	I0425 20:23:07.610864   79301 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 0gh46k.jpo3raxk2hiihtwf \
	I0425 20:23:07.610984   79301 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
	I0425 20:23:07.611020   79301 kubeadm.go:309] 	--control-plane 
	I0425 20:23:07.611035   79301 kubeadm.go:309] 
	I0425 20:23:07.611162   79301 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 20:23:07.611172   79301 kubeadm.go:309] 
	I0425 20:23:07.611299   79301 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 0gh46k.jpo3raxk2hiihtwf \
	I0425 20:23:07.611464   79301 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed 
	I0425 20:23:07.611961   79301 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
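The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. Assuming the default RSA CA and minikube's cert layout (/var/lib/minikube/certs/ca.crt, per this log), it can be recomputed on the node with the standard recipe:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'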
	I0425 20:23:07.612002   79301 cni.go:84] Creating CNI manager for ""
	I0425 20:23:07.612020   79301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:23:07.614544   79301 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:23:07.615934   79301 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:23:07.628269   79301 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
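The 496-byte file written above is the generated bridge CNI config; reading it back from the guest is a quick way to see exactly what it contains (for example, whether it embeds the 10.42.0.0/16 pod subnet configured earlier). Sketch, same profile assumption as above:

    minikube -p newest-cni-366100 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"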
	I0425 20:23:07.652086   79301 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:23:07.652164   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-366100 minikube.k8s.io/updated_at=2024_04_25T20_23_07_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=newest-cni-366100 minikube.k8s.io/primary=true
	I0425 20:23:07.652171   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:07.707604   79301 ops.go:34] apiserver oom_adj: -16
	I0425 20:23:07.885107   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:08.385228   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:08.885696   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:09.385821   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:09.885225   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:10.385682   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:10.886155   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:11.385642   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:11.885356   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:12.385229   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:12.885760   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:13.386241   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:13.885611   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:14.385211   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:14.886197   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:15.385249   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:15.885986   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:16.385808   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:16.885770   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:17.385466   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:17.885567   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:18.385965   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:18.885211   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:19.385802   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:19.885711   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:20.385630   79301 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:23:20.506138   79301 kubeadm.go:1107] duration metric: took 12.85402768s to wait for elevateKubeSystemPrivileges
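The burst of "kubectl get sa default" calls above is minikube polling until the default service account exists before relying on the cluster-admin binding for kube-system:default created earlier. The same wait, sketched as a plain loop against this cluster's context:

    until kubectl --context newest-cni-366100 -n default get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done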
	W0425 20:23:20.506183   79301 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 20:23:20.506199   79301 kubeadm.go:393] duration metric: took 24.600170302s to StartCluster
	I0425 20:23:20.506236   79301 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:23:20.506336   79301 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:23:20.508500   79301 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:23:20.508736   79301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0425 20:23:20.508751   79301 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:23:20.508795   79301 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-366100"
	I0425 20:23:20.508733   79301 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:23:20.508834   79301 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-366100"
	I0425 20:23:20.508834   79301 addons.go:69] Setting default-storageclass=true in profile "newest-cni-366100"
	I0425 20:23:20.508858   79301 host.go:66] Checking if "newest-cni-366100" exists ...
	I0425 20:23:20.508865   79301 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-366100"
	I0425 20:23:20.510653   79301 out.go:177] * Verifying Kubernetes components...
	I0425 20:23:20.508973   79301 config.go:182] Loaded profile config "newest-cni-366100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:23:20.509249   79301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:23:20.509282   79301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:23:20.512099   79301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:23:20.512135   79301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:23:20.512095   79301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:23:20.527457   79301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45199
	I0425 20:23:20.527941   79301 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:23:20.528476   79301 main.go:141] libmachine: Using API Version  1
	I0425 20:23:20.528501   79301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:23:20.528852   79301 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:23:20.529057   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetState
	I0425 20:23:20.532238   79301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0425 20:23:20.532694   79301 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:23:20.533154   79301 addons.go:234] Setting addon default-storageclass=true in "newest-cni-366100"
	I0425 20:23:20.533167   79301 main.go:141] libmachine: Using API Version  1
	I0425 20:23:20.533188   79301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:23:20.533191   79301 host.go:66] Checking if "newest-cni-366100" exists ...
	I0425 20:23:20.533511   79301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:23:20.533524   79301 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:23:20.533553   79301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:23:20.534099   79301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:23:20.534141   79301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:23:20.550319   79301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33301
	I0425 20:23:20.550356   79301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0425 20:23:20.550875   79301 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:23:20.550921   79301 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:23:20.551447   79301 main.go:141] libmachine: Using API Version  1
	I0425 20:23:20.551464   79301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:23:20.551811   79301 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:23:20.551980   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetState
	I0425 20:23:20.552634   79301 main.go:141] libmachine: Using API Version  1
	I0425 20:23:20.552654   79301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:23:20.553150   79301 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:23:20.553908   79301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:23:20.553939   79301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:23:20.554091   79301 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:20.556074   79301 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:23:20.557459   79301 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:23:20.557471   79301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:23:20.557484   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:20.560388   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:20.560827   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:20.560894   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:20.561191   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:23:20.561358   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:20.561480   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:23:20.561604   79301 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa Username:docker}
	I0425 20:23:20.574907   79301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38333
	I0425 20:23:20.575438   79301 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:23:20.575915   79301 main.go:141] libmachine: Using API Version  1
	I0425 20:23:20.575934   79301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:23:20.576291   79301 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:23:20.576464   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetState
	I0425 20:23:20.578276   79301 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:20.578532   79301 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:23:20.578547   79301 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:23:20.578566   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHHostname
	I0425 20:23:20.582488   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:20.582953   79301 main.go:141] libmachine: (newest-cni-366100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:4f:45", ip: ""} in network mk-newest-cni-366100: {Iface:virbr4 ExpiryTime:2024-04-25 21:22:39 +0000 UTC Type:0 Mac:52:54:00:a7:4f:45 Iaid: IPaddr:192.168.61.209 Prefix:24 Hostname:newest-cni-366100 Clientid:01:52:54:00:a7:4f:45}
	I0425 20:23:20.582982   79301 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined IP address 192.168.61.209 and MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:20.583234   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHPort
	I0425 20:23:20.583431   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHKeyPath
	I0425 20:23:20.583614   79301 main.go:141] libmachine: (newest-cni-366100) Calling .GetSSHUsername
	I0425 20:23:20.583718   79301 sshutil.go:53] new ssh client: &{IP:192.168.61.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/newest-cni-366100/id_rsa Username:docker}
	I0425 20:23:20.906543   79301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:23:20.906585   79301 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0425 20:23:20.909953   79301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:23:20.978037   79301 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:23:21.783772   79301 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
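The sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to 192.168.61.1 inside the cluster, and this line confirms the record landed. One way to check the injected hosts block afterwards (sketch):

    kubectl --context newest-cni-366100 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'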
	I0425 20:23:21.783894   79301 main.go:141] libmachine: Making call to close driver server
	I0425 20:23:21.783916   79301 main.go:141] libmachine: (newest-cni-366100) Calling .Close
	I0425 20:23:21.785153   79301 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:23:21.785229   79301 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:23:21.785531   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Closing plugin on server side
	I0425 20:23:21.785565   79301 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:23:21.785585   79301 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:23:21.785594   79301 main.go:141] libmachine: Making call to close driver server
	I0425 20:23:21.785606   79301 main.go:141] libmachine: (newest-cni-366100) Calling .Close
	I0425 20:23:21.785872   79301 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:23:21.785886   79301 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:23:21.844725   79301 main.go:141] libmachine: Making call to close driver server
	I0425 20:23:21.844749   79301 main.go:141] libmachine: (newest-cni-366100) Calling .Close
	I0425 20:23:21.845087   79301 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:23:21.845108   79301 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:23:22.298762   79301 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-366100" context rescaled to 1 replicas
	I0425 20:23:22.389867   79301 api_server.go:72] duration metric: took 1.881015149s to wait for apiserver process to appear ...
	I0425 20:23:22.389893   79301 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:23:22.389912   79301 api_server.go:253] Checking apiserver healthz at https://192.168.61.209:8443/healthz ...
	I0425 20:23:22.390119   79301 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.41202801s)
	I0425 20:23:22.390180   79301 main.go:141] libmachine: Making call to close driver server
	I0425 20:23:22.390199   79301 main.go:141] libmachine: (newest-cni-366100) Calling .Close
	I0425 20:23:22.390621   79301 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:23:22.390640   79301 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:23:22.390649   79301 main.go:141] libmachine: Making call to close driver server
	I0425 20:23:22.390671   79301 main.go:141] libmachine: (newest-cni-366100) Calling .Close
	I0425 20:23:22.392480   79301 main.go:141] libmachine: (newest-cni-366100) DBG | Closing plugin on server side
	I0425 20:23:22.392512   79301 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:23:22.392533   79301 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:23:22.394371   79301 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0425 20:23:22.395694   79301 addons.go:505] duration metric: took 1.886944137s for enable addons: enabled=[default-storageclass storage-provisioner]
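With both addons applied, the result can be confirmed from the host using the profile's kubeconfig context (sketch):

    kubectl --context newest-cni-366100 get storageclass
    kubectl --context newest-cni-366100 -n kube-system get pod storage-provisioner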
	I0425 20:23:22.402094   79301 api_server.go:279] https://192.168.61.209:8443/healthz returned 200:
	ok
	I0425 20:23:22.404913   79301 api_server.go:141] control plane version: v1.30.0
	I0425 20:23:22.404937   79301 api_server.go:131] duration metric: took 15.036334ms to wait for apiserver health ...
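The healthz probe above is a plain HTTPS GET against the apiserver; the same check can be reproduced with curl against the endpoint this run used, skipping certificate verification as a quick sanity check only:

    curl -k https://192.168.61.209:8443/healthz   # returns "ok" when healthy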
	I0425 20:23:22.404947   79301 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:23:22.448016   79301 system_pods.go:59] 8 kube-system pods found
	I0425 20:23:22.448048   79301 system_pods.go:61] "coredns-7db6d8ff4d-5qgrl" [8b3f1e4c-7d47-43f3-b5cd-5584b2190a17] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:23:22.448055   79301 system_pods.go:61] "coredns-7db6d8ff4d-j8lls" [32e9f50e-262d-473c-906f-4dddf37332b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:23:22.448061   79301 system_pods.go:61] "etcd-newest-cni-366100" [2ff105a2-1663-4bad-9a7f-c898b9386ae8] Running
	I0425 20:23:22.448065   79301 system_pods.go:61] "kube-apiserver-newest-cni-366100" [e33b1d67-2ce9-4d49-a39a-bd8601044c73] Running
	I0425 20:23:22.448071   79301 system_pods.go:61] "kube-controller-manager-newest-cni-366100" [fdcaff8c-3afe-4ff3-95ef-33f6c37f37c2] Running
	I0425 20:23:22.448074   79301 system_pods.go:61] "kube-proxy-jgmfs" [562df61e-452d-491d-b896-25b398d48ded] Running
	I0425 20:23:22.448078   79301 system_pods.go:61] "kube-scheduler-newest-cni-366100" [a02744c1-d48b-4ee3-a402-2ca82026775b] Running
	I0425 20:23:22.448081   79301 system_pods.go:61] "storage-provisioner" [8d7cad8f-4984-41fa-94a5-89b193dc2ef6] Pending
	I0425 20:23:22.448086   79301 system_pods.go:74] duration metric: took 43.133779ms to wait for pod list to return data ...
	I0425 20:23:22.448093   79301 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:23:22.454755   79301 default_sa.go:45] found service account: "default"
	I0425 20:23:22.454785   79301 default_sa.go:55] duration metric: took 6.68497ms for default service account to be created ...
	I0425 20:23:22.454799   79301 kubeadm.go:576] duration metric: took 1.945949819s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0425 20:23:22.454818   79301 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:23:22.459966   79301 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:23:22.459997   79301 node_conditions.go:123] node cpu capacity is 2
	I0425 20:23:22.460011   79301 node_conditions.go:105] duration metric: took 5.18719ms to run NodePressure ...
	I0425 20:23:22.460024   79301 start.go:240] waiting for startup goroutines ...
	I0425 20:23:22.460034   79301 start.go:245] waiting for cluster config update ...
	I0425 20:23:22.460049   79301 start.go:254] writing updated cluster config ...
	I0425 20:23:22.460378   79301 ssh_runner.go:195] Run: rm -f paused
	I0425 20:23:22.516091   79301 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:23:22.517995   79301 out.go:177] * Done! kubectl is now configured to use "newest-cni-366100" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.153639619Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed31a81d-5eb9-4264-a874-622f2094492d name=/runtime.v1.RuntimeService/Version
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.155176357Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=424b28ea-637e-4802-9399-a6e154730ab7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.156068021Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:39b55182ea9b9b9511d89190e753e0dcbacdd59e1609c3c3c5acbccb3b80bb66,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-2mxxt,Uid:44599c42-87cd-44ff-9377-fd52993919f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714075709505728933,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxxt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44599c42-87cd-44ff-9377-fd52993919f6,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T20:08:27.694481011Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2e920db1f861f5161dd3eaf69ba95be9ed1eaa121acd8414ed1b9d2347affe6f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xdl2d,Uid:4f11bf4f-f370-4957-95a1-773d255d227b,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714075709492485242,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdl2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f11bf4f-f370-4957-95a1-773d255d227b,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T20:08:27.685426585Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f21de6ec3c4821d8547aac23df6830468ef37b916bbfeaa2a8d642cd01f881f6,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-zpj9f,Uid:49e3f66c-0633-497b-81c9-2d68f1eeb45f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714075708974697185,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-zpj9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49e3f66c-0633-497b-81c9-2d68f1eeb45f,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2024-04-25T20:08:28.661500461Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33348938e34cfc8db6dd875de4fdec025925b7a31657ce148254ce89ebae9eca,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1960de28-d946-4cfb-99fd-dd89fd7f6e67,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714075708849240615,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1960de28-d946-4cfb-99fd-dd89fd7f6e67,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[
{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-25T20:08:28.535756504Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a361913ed2fb8ee1bcad23cf095cbb3983f37812a3222b5ca86f6d5848f3c615,Metadata:&PodSandboxMetadata{Name:kube-proxy-22w7x,Uid:82dda9cd-3cf5-4fdd-b4b6-f88e0360f513,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714075708822927384,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-22w7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dda9cd-3cf5-4fdd-b4b6-f88e0360f513,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-25T20:08:27.606934988Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7cdef08fe249bf9417509aef7b10bbc9536e2fe03517e1684c4d6f66c3191ef4,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-744552,Uid:80048aa3ed845c1d63441fe380468533,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714075688100856235,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80048aa3ed845c1d63441fe380468533,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.142:2379,kubernetes.io/config.hash: 80048aa3ed845c1d63441fe380468533,kubernetes.io/config.seen: 2024-04-25T20:08:07.612592348Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8dda842c6e476549593da9aaf1c47aa24817c668b78f816e5c20239ecab56b7b,Met
adata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-744552,Uid:a480a53c7855225626492dfd8c653ea3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714075688097841919,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a480a53c7855225626492dfd8c653ea3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a480a53c7855225626492dfd8c653ea3,kubernetes.io/config.seen: 2024-04-25T20:08:07.612587944Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:742c3330a0a89d0dd9ef08ebac6b4b284024139c3dce81ec2bf9994ab0402882,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-744552,Uid:747e3598f2fa1ffc2618ff97b0571488,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714075688095710119,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kube-apiserver-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 747e3598f2fa1ffc2618ff97b0571488,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.142:8443,kubernetes.io/config.hash: 747e3598f2fa1ffc2618ff97b0571488,kubernetes.io/config.seen: 2024-04-25T20:08:07.612593660Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:85d4601c24196f706b84b44e7e24a48f53e20aa45629b1291a23ecd091b7a940,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-744552,Uid:b282287dd65b57af6e5aa6ec38640dd2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714075688087985678,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b282287dd65b57af6e5aa6ec38640dd2,tier: control-plane,},Annotations:map[string]strin
g{kubernetes.io/config.hash: b282287dd65b57af6e5aa6ec38640dd2,kubernetes.io/config.seen: 2024-04-25T20:08:07.612595224Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=424b28ea-637e-4802-9399-a6e154730ab7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.162422467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fda8d7bd-9d12-499a-a69a-97594a56de86 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.162588510Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fda8d7bd-9d12-499a-a69a-97594a56de86 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.162938340Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f60cda47620ae0c7640d7f7b8531567b0f12ca7f4be1b5ae77939138e3bfce0c,PodSandboxId:39b55182ea9b9b9511d89190e753e0dcbacdd59e1609c3c3c5acbccb3b80bb66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709956145126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxxt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44599c42-87cd-44ff-9377-fd52993919f6,},Annotations:map[string]string{io.kubernetes.container.hash: 8edb01ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35dd66e9dd75e198b6914d960aa13c56c30942fed1b2eab52fa6f605277304a1,PodSandboxId:2e920db1f861f5161dd3eaf69ba95be9ed1eaa121acd8414ed1b9d2347affe6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709907761921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdl2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f11bf4f-f370-4957-95a1-773d255d227b,},Annotations:map[string]string{io.kubernetes.container.hash: dcf79dd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d7cefd108b725836b24bb8c42f8a398b76c5c5d495b7ea5d653ac67a685582,PodSandboxId:a361913ed2fb8ee1bcad23cf095cbb3983f37812a3222b5ca86f6d5848f3c615,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714075709104778853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22w7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dda9cd-3cf5-4fdd-b4b6-f88e0360f513,},Annotations:map[string]string{io.kubernetes.container.hash: a4be3b58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280170b99deab6385c954bfdfe114fe4be6b971d8ec047921e7c36e2c62323b,PodSandboxId:33348938e34cfc8db6dd875de4fdec025925b7a31657ce148254ce89ebae9eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171407570906
3034608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1960de28-d946-4cfb-99fd-dd89fd7f6e67,},Annotations:map[string]string{io.kubernetes.container.hash: ccd0a75c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d760a6bfe9ed83526ada5d764e3d80fab995270530df7bb4df4733e3fe72bdfb,PodSandboxId:8dda842c6e476549593da9aaf1c47aa24817c668b78f816e5c20239ecab56b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075688424967058,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a480a53c7855225626492dfd8c653ea3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eb87cf504ed4707126c6b0f5e37b36ebfc7801efd594d7450f75ae6d82c303,PodSandboxId:742c3330a0a89d0dd9ef08ebac6b4b284024139c3dce81ec2bf9994ab0402882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075688392667005,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 747e3598f2fa1ffc2618ff97b0571488,},Annotations:map[string]string{io.kubernetes.container.hash: 829b1439,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d02c3f6172773ca8b2e3501f4c826c0d7e90c1d1b9df69650fc9b06fbfc1e09,PodSandboxId:85d4601c24196f706b84b44e7e24a48f53e20aa45629b1291a23ecd091b7a940,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075688316745642,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b282287dd65b57af6e5aa6ec38640dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7405b686b29b061435baf005e384e6b2c3cfdb12bf75325a1723414682df0f,PodSandboxId:7cdef08fe249bf9417509aef7b10bbc9536e2fe03517e1684c4d6f66c3191ef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075688323330202,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80048aa3ed845c1d63441fe380468533,},Annotations:map[string]string{io.kubernetes.container.hash: a6e99913,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fda8d7bd-9d12-499a-a69a-97594a56de86 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.164033186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=033c6d4b-40f9-4be1-9fe8-bf8cc7c17334 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.164514969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076607164495717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=033c6d4b-40f9-4be1-9fe8-bf8cc7c17334 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.165456464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96ffca2e-792c-4bb5-87b3-77328697d3e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.165652718Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96ffca2e-792c-4bb5-87b3-77328697d3e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.165983776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f60cda47620ae0c7640d7f7b8531567b0f12ca7f4be1b5ae77939138e3bfce0c,PodSandboxId:39b55182ea9b9b9511d89190e753e0dcbacdd59e1609c3c3c5acbccb3b80bb66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709956145126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxxt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44599c42-87cd-44ff-9377-fd52993919f6,},Annotations:map[string]string{io.kubernetes.container.hash: 8edb01ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35dd66e9dd75e198b6914d960aa13c56c30942fed1b2eab52fa6f605277304a1,PodSandboxId:2e920db1f861f5161dd3eaf69ba95be9ed1eaa121acd8414ed1b9d2347affe6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709907761921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdl2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f11bf4f-f370-4957-95a1-773d255d227b,},Annotations:map[string]string{io.kubernetes.container.hash: dcf79dd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d7cefd108b725836b24bb8c42f8a398b76c5c5d495b7ea5d653ac67a685582,PodSandboxId:a361913ed2fb8ee1bcad23cf095cbb3983f37812a3222b5ca86f6d5848f3c615,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714075709104778853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22w7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dda9cd-3cf5-4fdd-b4b6-f88e0360f513,},Annotations:map[string]string{io.kubernetes.container.hash: a4be3b58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280170b99deab6385c954bfdfe114fe4be6b971d8ec047921e7c36e2c62323b,PodSandboxId:33348938e34cfc8db6dd875de4fdec025925b7a31657ce148254ce89ebae9eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171407570906
3034608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1960de28-d946-4cfb-99fd-dd89fd7f6e67,},Annotations:map[string]string{io.kubernetes.container.hash: ccd0a75c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d760a6bfe9ed83526ada5d764e3d80fab995270530df7bb4df4733e3fe72bdfb,PodSandboxId:8dda842c6e476549593da9aaf1c47aa24817c668b78f816e5c20239ecab56b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075688424967058,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a480a53c7855225626492dfd8c653ea3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eb87cf504ed4707126c6b0f5e37b36ebfc7801efd594d7450f75ae6d82c303,PodSandboxId:742c3330a0a89d0dd9ef08ebac6b4b284024139c3dce81ec2bf9994ab0402882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075688392667005,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 747e3598f2fa1ffc2618ff97b0571488,},Annotations:map[string]string{io.kubernetes.container.hash: 829b1439,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d02c3f6172773ca8b2e3501f4c826c0d7e90c1d1b9df69650fc9b06fbfc1e09,PodSandboxId:85d4601c24196f706b84b44e7e24a48f53e20aa45629b1291a23ecd091b7a940,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075688316745642,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b282287dd65b57af6e5aa6ec38640dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7405b686b29b061435baf005e384e6b2c3cfdb12bf75325a1723414682df0f,PodSandboxId:7cdef08fe249bf9417509aef7b10bbc9536e2fe03517e1684c4d6f66c3191ef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075688323330202,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80048aa3ed845c1d63441fe380468533,},Annotations:map[string]string{io.kubernetes.container.hash: a6e99913,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96ffca2e-792c-4bb5-87b3-77328697d3e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.213666284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9420853-553c-4d39-8395-fc92e671b57a name=/runtime.v1.RuntimeService/Version
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.213766904Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9420853-553c-4d39-8395-fc92e671b57a name=/runtime.v1.RuntimeService/Version
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.215557890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc804ba6-eff1-42f3-93c2-32ebd09680e1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.216013488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076607215986390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc804ba6-eff1-42f3-93c2-32ebd09680e1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.216955268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cf2598e-c049-449e-b94f-ac853727188e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.217005029Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cf2598e-c049-449e-b94f-ac853727188e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.217181271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f60cda47620ae0c7640d7f7b8531567b0f12ca7f4be1b5ae77939138e3bfce0c,PodSandboxId:39b55182ea9b9b9511d89190e753e0dcbacdd59e1609c3c3c5acbccb3b80bb66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709956145126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxxt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44599c42-87cd-44ff-9377-fd52993919f6,},Annotations:map[string]string{io.kubernetes.container.hash: 8edb01ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35dd66e9dd75e198b6914d960aa13c56c30942fed1b2eab52fa6f605277304a1,PodSandboxId:2e920db1f861f5161dd3eaf69ba95be9ed1eaa121acd8414ed1b9d2347affe6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709907761921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdl2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f11bf4f-f370-4957-95a1-773d255d227b,},Annotations:map[string]string{io.kubernetes.container.hash: dcf79dd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d7cefd108b725836b24bb8c42f8a398b76c5c5d495b7ea5d653ac67a685582,PodSandboxId:a361913ed2fb8ee1bcad23cf095cbb3983f37812a3222b5ca86f6d5848f3c615,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714075709104778853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22w7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dda9cd-3cf5-4fdd-b4b6-f88e0360f513,},Annotations:map[string]string{io.kubernetes.container.hash: a4be3b58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280170b99deab6385c954bfdfe114fe4be6b971d8ec047921e7c36e2c62323b,PodSandboxId:33348938e34cfc8db6dd875de4fdec025925b7a31657ce148254ce89ebae9eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171407570906
3034608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1960de28-d946-4cfb-99fd-dd89fd7f6e67,},Annotations:map[string]string{io.kubernetes.container.hash: ccd0a75c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d760a6bfe9ed83526ada5d764e3d80fab995270530df7bb4df4733e3fe72bdfb,PodSandboxId:8dda842c6e476549593da9aaf1c47aa24817c668b78f816e5c20239ecab56b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075688424967058,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a480a53c7855225626492dfd8c653ea3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eb87cf504ed4707126c6b0f5e37b36ebfc7801efd594d7450f75ae6d82c303,PodSandboxId:742c3330a0a89d0dd9ef08ebac6b4b284024139c3dce81ec2bf9994ab0402882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075688392667005,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 747e3598f2fa1ffc2618ff97b0571488,},Annotations:map[string]string{io.kubernetes.container.hash: 829b1439,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d02c3f6172773ca8b2e3501f4c826c0d7e90c1d1b9df69650fc9b06fbfc1e09,PodSandboxId:85d4601c24196f706b84b44e7e24a48f53e20aa45629b1291a23ecd091b7a940,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075688316745642,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b282287dd65b57af6e5aa6ec38640dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7405b686b29b061435baf005e384e6b2c3cfdb12bf75325a1723414682df0f,PodSandboxId:7cdef08fe249bf9417509aef7b10bbc9536e2fe03517e1684c4d6f66c3191ef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075688323330202,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80048aa3ed845c1d63441fe380468533,},Annotations:map[string]string{io.kubernetes.container.hash: a6e99913,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cf2598e-c049-449e-b94f-ac853727188e name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.267620936Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5718cf7-03f3-4ca5-9157-6cbef02a9951 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.267718475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5718cf7-03f3-4ca5-9157-6cbef02a9951 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.269836599Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66d998c6-f26d-4433-a1bd-86a2e748b1b9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.270312056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076607270280891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66d998c6-f26d-4433-a1bd-86a2e748b1b9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.271219898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adad8be4-1b64-4d2b-947e-1b1d30584677 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.271320634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adad8be4-1b64-4d2b-947e-1b1d30584677 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:27 no-preload-744552 crio[729]: time="2024-04-25 20:23:27.271798969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f60cda47620ae0c7640d7f7b8531567b0f12ca7f4be1b5ae77939138e3bfce0c,PodSandboxId:39b55182ea9b9b9511d89190e753e0dcbacdd59e1609c3c3c5acbccb3b80bb66,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709956145126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2mxxt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44599c42-87cd-44ff-9377-fd52993919f6,},Annotations:map[string]string{io.kubernetes.container.hash: 8edb01ad,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35dd66e9dd75e198b6914d960aa13c56c30942fed1b2eab52fa6f605277304a1,PodSandboxId:2e920db1f861f5161dd3eaf69ba95be9ed1eaa121acd8414ed1b9d2347affe6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075709907761921,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xdl2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f11bf4f-f370-4957-95a1-773d255d227b,},Annotations:map[string]string{io.kubernetes.container.hash: dcf79dd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39d7cefd108b725836b24bb8c42f8a398b76c5c5d495b7ea5d653ac67a685582,PodSandboxId:a361913ed2fb8ee1bcad23cf095cbb3983f37812a3222b5ca86f6d5848f3c615,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNIN
G,CreatedAt:1714075709104778853,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-22w7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82dda9cd-3cf5-4fdd-b4b6-f88e0360f513,},Annotations:map[string]string{io.kubernetes.container.hash: a4be3b58,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9280170b99deab6385c954bfdfe114fe4be6b971d8ec047921e7c36e2c62323b,PodSandboxId:33348938e34cfc8db6dd875de4fdec025925b7a31657ce148254ce89ebae9eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171407570906
3034608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1960de28-d946-4cfb-99fd-dd89fd7f6e67,},Annotations:map[string]string{io.kubernetes.container.hash: ccd0a75c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d760a6bfe9ed83526ada5d764e3d80fab995270530df7bb4df4733e3fe72bdfb,PodSandboxId:8dda842c6e476549593da9aaf1c47aa24817c668b78f816e5c20239ecab56b7b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075688424967058,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a480a53c7855225626492dfd8c653ea3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5eb87cf504ed4707126c6b0f5e37b36ebfc7801efd594d7450f75ae6d82c303,PodSandboxId:742c3330a0a89d0dd9ef08ebac6b4b284024139c3dce81ec2bf9994ab0402882,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075688392667005,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 747e3598f2fa1ffc2618ff97b0571488,},Annotations:map[string]string{io.kubernetes.container.hash: 829b1439,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d02c3f6172773ca8b2e3501f4c826c0d7e90c1d1b9df69650fc9b06fbfc1e09,PodSandboxId:85d4601c24196f706b84b44e7e24a48f53e20aa45629b1291a23ecd091b7a940,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075688316745642,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b282287dd65b57af6e5aa6ec38640dd2,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd7405b686b29b061435baf005e384e6b2c3cfdb12bf75325a1723414682df0f,PodSandboxId:7cdef08fe249bf9417509aef7b10bbc9536e2fe03517e1684c4d6f66c3191ef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075688323330202,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-744552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80048aa3ed845c1d63441fe380468533,},Annotations:map[string]string{io.kubernetes.container.hash: a6e99913,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adad8be4-1b64-4d2b-947e-1b1d30584677 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f60cda47620ae       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   39b55182ea9b9       coredns-7db6d8ff4d-2mxxt
	35dd66e9dd75e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   2e920db1f861f       coredns-7db6d8ff4d-xdl2d
	39d7cefd108b7       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   14 minutes ago      Running             kube-proxy                0                   a361913ed2fb8       kube-proxy-22w7x
	9280170b99dea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   33348938e34cf       storage-provisioner
	d760a6bfe9ed8       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   15 minutes ago      Running             kube-scheduler            2                   8dda842c6e476       kube-scheduler-no-preload-744552
	a5eb87cf504ed       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   15 minutes ago      Running             kube-apiserver            2                   742c3330a0a89       kube-apiserver-no-preload-744552
	cd7405b686b29       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   7cdef08fe249b       etcd-no-preload-744552
	0d02c3f617277       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   15 minutes ago      Running             kube-controller-manager   2                   85d4601c24196       kube-controller-manager-no-preload-744552
	
	
	==> coredns [35dd66e9dd75e198b6914d960aa13c56c30942fed1b2eab52fa6f605277304a1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f60cda47620ae0c7640d7f7b8531567b0f12ca7f4be1b5ae77939138e3bfce0c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-744552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-744552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=no-preload-744552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T20_08_15_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 20:08:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-744552
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 20:23:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 20:18:47 +0000   Thu, 25 Apr 2024 20:08:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 20:18:47 +0000   Thu, 25 Apr 2024 20:08:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 20:18:47 +0000   Thu, 25 Apr 2024 20:08:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 20:18:47 +0000   Thu, 25 Apr 2024 20:08:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.142
	  Hostname:    no-preload-744552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 38800b8f3279411fb3268a56d385002c
	  System UUID:                38800b8f-3279-411f-b326-8a56d385002c
	  Boot ID:                    30963a51-cffd-4030-bc24-715b76ee9a9f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2mxxt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-xdl2d                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-744552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-744552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-744552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-22w7x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-744552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-zpj9f              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node no-preload-744552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node no-preload-744552 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node no-preload-744552 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node no-preload-744552 event: Registered Node no-preload-744552 in Controller
	
	
	==> dmesg <==
	[  +0.052182] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043472] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.630967] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.472947] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.710164] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.324199] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.055598] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066901] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.208122] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.147239] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.315615] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[Apr25 20:03] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.065708] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.313424] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +5.670561] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.582042] kauditd_printk_skb: 79 callbacks suppressed
	[Apr25 20:08] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.628851] systemd-fstab-generator[4014]: Ignoring "noauto" option for root device
	[  +4.470713] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.430215] systemd-fstab-generator[4338]: Ignoring "noauto" option for root device
	[ +13.484501] systemd-fstab-generator[4551]: Ignoring "noauto" option for root device
	[  +0.118508] kauditd_printk_skb: 14 callbacks suppressed
	[Apr25 20:09] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [cd7405b686b29b061435baf005e384e6b2c3cfdb12bf75325a1723414682df0f] <==
	{"level":"info","ts":"2024-04-25T20:08:09.1006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9be3859290e499ce became leader at term 2"}
	{"level":"info","ts":"2024-04-25T20:08:09.100608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9be3859290e499ce elected leader 9be3859290e499ce at term 2"}
	{"level":"info","ts":"2024-04-25T20:08:09.10465Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T20:08:09.108744Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9be3859290e499ce","local-member-attributes":"{Name:no-preload-744552 ClientURLs:[https://192.168.72.142:2379]}","request-path":"/0/members/9be3859290e499ce/attributes","cluster-id":"7a995cf908c9189","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T20:08:09.108925Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T20:08:09.109288Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T20:08:09.109509Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T20:08:09.109555Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T20:08:09.11728Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.142:2379"}
	{"level":"info","ts":"2024-04-25T20:08:09.122068Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-25T20:08:09.15349Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7a995cf908c9189","local-member-id":"9be3859290e499ce","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T20:08:09.153598Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T20:08:09.153625Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-25T20:18:09.543538Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":679}
	{"level":"info","ts":"2024-04-25T20:18:09.554547Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":679,"took":"10.275866ms","hash":3097078622,"current-db-size-bytes":2273280,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2273280,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-04-25T20:18:09.554646Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3097078622,"revision":679,"compact-revision":-1}
	{"level":"info","ts":"2024-04-25T20:22:56.846476Z","caller":"traceutil/trace.go:171","msg":"trace[1536278061] linearizableReadLoop","detail":"{readStateIndex:1343; appliedIndex:1342; }","duration":"377.162001ms","start":"2024-04-25T20:22:56.469254Z","end":"2024-04-25T20:22:56.846416Z","steps":["trace[1536278061] 'read index received'  (duration: 376.91001ms)","trace[1536278061] 'applied index is now lower than readState.Index'  (duration: 248.975µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-25T20:22:56.847697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"378.348907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-25T20:22:56.847772Z","caller":"traceutil/trace.go:171","msg":"trace[61714000] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1155; }","duration":"378.505748ms","start":"2024-04-25T20:22:56.469249Z","end":"2024-04-25T20:22:56.847755Z","steps":["trace[61714000] 'agreement among raft nodes before linearized reading'  (duration: 378.323501ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T20:22:56.847887Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:22:56.469215Z","time spent":"378.653549ms","remote":"127.0.0.1:50176","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"info","ts":"2024-04-25T20:22:56.847006Z","caller":"traceutil/trace.go:171","msg":"trace[1423242459] transaction","detail":"{read_only:false; response_revision:1155; number_of_response:1; }","duration":"404.424262ms","start":"2024-04-25T20:22:56.442559Z","end":"2024-04-25T20:22:56.846983Z","steps":["trace[1423242459] 'process raft request'  (duration: 403.612848ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T20:22:56.849093Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-25T20:22:56.44254Z","time spent":"405.671124ms","remote":"127.0.0.1:50156","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1154 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-04-25T20:23:09.55567Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":922}
	{"level":"info","ts":"2024-04-25T20:23:09.56048Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":922,"took":"4.470506ms","hash":1009926439,"current-db-size-bytes":2273280,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1617920,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-25T20:23:09.560549Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1009926439,"revision":922,"compact-revision":679}
	
	
	==> kernel <==
	 20:23:27 up 20 min,  0 users,  load average: 0.10, 0.15, 0.15
	Linux no-preload-744552 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a5eb87cf504ed4707126c6b0f5e37b36ebfc7801efd594d7450f75ae6d82c303] <==
	I0425 20:18:12.331179       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:19:12.330930       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:19:12.331159       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:19:12.331203       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:19:12.331326       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:19:12.331458       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:19:12.333004       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:21:12.332655       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:21:12.332934       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:21:12.332944       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:21:12.333722       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:21:12.333904       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:21:12.334004       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:23:11.334885       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:23:11.335132       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0425 20:23:12.336495       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:23:12.336658       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:23:12.336711       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:23:12.336647       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:23:12.336774       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:23:12.338004       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0d02c3f6172773ca8b2e3501f4c826c0d7e90c1d1b9df69650fc9b06fbfc1e09] <==
	I0425 20:17:57.372968       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:18:26.792643       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:18:27.382731       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:18:56.797265       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:18:57.392885       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:19:26.803252       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:19:27.402993       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0425 20:19:35.480530       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="169.154µs"
	I0425 20:19:46.479573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="79.336µs"
	E0425 20:19:56.808568       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:19:57.413147       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:20:26.820685       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:20:27.422127       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:20:56.827506       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:20:57.431316       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:21:26.832985       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:21:27.439271       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:21:56.838782       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:21:57.447879       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:22:26.846207       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:22:27.461618       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:22:56.852490       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:22:57.472552       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:23:26.861221       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:23:27.486866       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [39d7cefd108b725836b24bb8c42f8a398b76c5c5d495b7ea5d653ac67a685582] <==
	I0425 20:08:29.390429       1 server_linux.go:69] "Using iptables proxy"
	I0425 20:08:29.406480       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.142"]
	I0425 20:08:29.470274       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 20:08:29.470312       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 20:08:29.470328       1 server_linux.go:165] "Using iptables Proxier"
	I0425 20:08:29.475766       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 20:08:29.476127       1 server.go:872] "Version info" version="v1.30.0"
	I0425 20:08:29.476183       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 20:08:29.479022       1 config.go:192] "Starting service config controller"
	I0425 20:08:29.479079       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 20:08:29.479124       1 config.go:101] "Starting endpoint slice config controller"
	I0425 20:08:29.479140       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 20:08:29.479873       1 config.go:319] "Starting node config controller"
	I0425 20:08:29.481849       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 20:08:29.579899       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 20:08:29.579929       1 shared_informer.go:320] Caches are synced for service config
	I0425 20:08:29.586330       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d760a6bfe9ed83526ada5d764e3d80fab995270530df7bb4df4733e3fe72bdfb] <==
	E0425 20:08:11.345228       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0425 20:08:11.344193       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 20:08:11.345279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 20:08:11.344240       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 20:08:11.345326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 20:08:11.345491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 20:08:11.345706       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 20:08:11.345756       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 20:08:12.193677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0425 20:08:12.193727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0425 20:08:12.214268       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0425 20:08:12.214413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0425 20:08:12.255532       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0425 20:08:12.255666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0425 20:08:12.258952       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0425 20:08:12.259039       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 20:08:12.437924       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0425 20:08:12.438053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0425 20:08:12.438140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0425 20:08:12.438863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0425 20:08:12.604314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0425 20:08:12.604538       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0425 20:08:12.670764       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0425 20:08:12.670820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0425 20:08:15.437873       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 25 20:21:14 no-preload-744552 kubelet[4345]: E0425 20:21:14.502941    4345 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:21:14 no-preload-744552 kubelet[4345]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:21:14 no-preload-744552 kubelet[4345]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:21:14 no-preload-744552 kubelet[4345]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:21:14 no-preload-744552 kubelet[4345]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:21:18 no-preload-744552 kubelet[4345]: E0425 20:21:18.461318    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:21:33 no-preload-744552 kubelet[4345]: E0425 20:21:33.460637    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:21:45 no-preload-744552 kubelet[4345]: E0425 20:21:45.461601    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:21:58 no-preload-744552 kubelet[4345]: E0425 20:21:58.461008    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:22:13 no-preload-744552 kubelet[4345]: E0425 20:22:13.464020    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:22:14 no-preload-744552 kubelet[4345]: E0425 20:22:14.505329    4345 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:22:14 no-preload-744552 kubelet[4345]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:22:14 no-preload-744552 kubelet[4345]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:22:14 no-preload-744552 kubelet[4345]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:22:14 no-preload-744552 kubelet[4345]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:22:26 no-preload-744552 kubelet[4345]: E0425 20:22:26.462777    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:22:39 no-preload-744552 kubelet[4345]: E0425 20:22:39.461322    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:22:54 no-preload-744552 kubelet[4345]: E0425 20:22:54.461316    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:23:05 no-preload-744552 kubelet[4345]: E0425 20:23:05.461857    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	Apr 25 20:23:14 no-preload-744552 kubelet[4345]: E0425 20:23:14.501557    4345 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:23:14 no-preload-744552 kubelet[4345]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:23:14 no-preload-744552 kubelet[4345]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:23:14 no-preload-744552 kubelet[4345]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:23:14 no-preload-744552 kubelet[4345]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:23:19 no-preload-744552 kubelet[4345]: E0425 20:23:19.461328    4345 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-zpj9f" podUID="49e3f66c-0633-497b-81c9-2d68f1eeb45f"
	
	
	==> storage-provisioner [9280170b99deab6385c954bfdfe114fe4be6b971d8ec047921e7c36e2c62323b] <==
	I0425 20:08:29.224574       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0425 20:08:29.261335       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0425 20:08:29.261521       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0425 20:08:29.278839       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0425 20:08:29.279099       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-744552_1f08e049-89b8-4094-bce1-23bc472ee6e9!
	I0425 20:08:29.279896       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eefb9920-1470-4da9-b4fc-8c0df48631f6", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-744552_1f08e049-89b8-4094-bce1-23bc472ee6e9 became leader
	I0425 20:08:29.380482       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-744552_1f08e049-89b8-4094-bce1-23bc472ee6e9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-744552 -n no-preload-744552
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-744552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-zpj9f
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-744552 describe pod metrics-server-569cc877fc-zpj9f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-744552 describe pod metrics-server-569cc877fc-zpj9f: exit status 1 (62.851907ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-zpj9f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-744552 describe pod metrics-server-569cc877fc-zpj9f: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (350.96s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (379.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-512173 -n embed-certs-512173
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-25 20:23:57.763986304 +0000 UTC m=+6764.172920854
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-512173 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-512173 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.862µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-512173 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512173 -n embed-certs-512173
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-512173 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-512173 logs -n 25: (1.395858516s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-113000 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:54 UTC |
	|         | disable-driver-mounts-113000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-512173            | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-744552             | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142196  | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210442        | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-512173                 | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-744552                  | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142196       | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:07 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210442             | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 20:22 UTC | 25 Apr 24 20:22 UTC |
	| start   | -p newest-cni-366100 --memory=2200 --alsologtostderr   | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:22 UTC | 25 Apr 24 20:23 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-366100             | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC | 25 Apr 24 20:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-366100                                   | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC | 25 Apr 24 20:23 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC | 25 Apr 24 20:23 UTC |
	| addons  | enable dashboard -p newest-cni-366100                  | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC | 25 Apr 24 20:23 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-366100 --memory=2200 --alsologtostderr   | newest-cni-366100            | jenkins | v1.33.0 | 25 Apr 24 20:23 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 20:23:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 20:23:34.622885   80072 out.go:291] Setting OutFile to fd 1 ...
	I0425 20:23:34.623014   80072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 20:23:34.623024   80072 out.go:304] Setting ErrFile to fd 2...
	I0425 20:23:34.623029   80072 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 20:23:34.623199   80072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 20:23:34.623718   80072 out.go:298] Setting JSON to false
	I0425 20:23:34.624605   80072 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7561,"bootTime":1714069054,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 20:23:34.624663   80072 start.go:139] virtualization: kvm guest
	I0425 20:23:34.627084   80072 out.go:177] * [newest-cni-366100] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 20:23:34.628380   80072 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 20:23:34.628457   80072 notify.go:220] Checking for updates...
	I0425 20:23:34.629591   80072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 20:23:34.630992   80072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:23:34.632190   80072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 20:23:34.633572   80072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 20:23:34.634935   80072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 20:23:34.636798   80072 config.go:182] Loaded profile config "newest-cni-366100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:23:34.637395   80072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:23:34.637480   80072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:23:34.652537   80072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I0425 20:23:34.653008   80072 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:23:34.653497   80072 main.go:141] libmachine: Using API Version  1
	I0425 20:23:34.653521   80072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:23:34.653900   80072 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:23:34.654053   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:34.654320   80072 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 20:23:34.654678   80072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:23:34.654723   80072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:23:34.669855   80072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
	I0425 20:23:34.670354   80072 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:23:34.670850   80072 main.go:141] libmachine: Using API Version  1
	I0425 20:23:34.670876   80072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:23:34.671176   80072 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:23:34.671352   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:34.708419   80072 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 20:23:34.709628   80072 start.go:297] selected driver: kvm2
	I0425 20:23:34.709642   80072 start.go:901] validating driver "kvm2" against &{Name:newest-cni-366100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-366100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:23:34.709765   80072 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 20:23:34.710456   80072 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 20:23:34.710517   80072 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 20:23:34.725234   80072 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 20:23:34.725701   80072 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0425 20:23:34.725784   80072 cni.go:84] Creating CNI manager for ""
	I0425 20:23:34.725802   80072 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:23:34.725853   80072 start.go:340] cluster config:
	{Name:newest-cni-366100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-366100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.209 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:23:34.725960   80072 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 20:23:34.727617   80072 out.go:177] * Starting "newest-cni-366100" primary control-plane node in "newest-cni-366100" cluster
	I0425 20:23:34.728921   80072 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:23:34.728971   80072 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 20:23:34.728991   80072 cache.go:56] Caching tarball of preloaded images
	I0425 20:23:34.729086   80072 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 20:23:34.729106   80072 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 20:23:34.729234   80072 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/newest-cni-366100/config.json ...
	I0425 20:23:34.729426   80072 start.go:360] acquireMachinesLock for newest-cni-366100: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 20:23:34.729467   80072 start.go:364] duration metric: took 23.287µs to acquireMachinesLock for "newest-cni-366100"
	I0425 20:23:34.729478   80072 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:23:34.729482   80072 fix.go:54] fixHost starting: 
	I0425 20:23:34.729722   80072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:23:34.729752   80072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:23:34.744430   80072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43301
	I0425 20:23:34.744910   80072 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:23:34.745384   80072 main.go:141] libmachine: Using API Version  1
	I0425 20:23:34.745417   80072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:23:34.745812   80072 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:23:34.745973   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	I0425 20:23:34.746108   80072 main.go:141] libmachine: (newest-cni-366100) Calling .GetState
	I0425 20:23:34.747691   80072 fix.go:112] recreateIfNeeded on newest-cni-366100: state=Stopped err=<nil>
	I0425 20:23:34.747715   80072 main.go:141] libmachine: (newest-cni-366100) Calling .DriverName
	W0425 20:23:34.747877   80072 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:23:34.749786   80072 out.go:177] * Restarting existing kvm2 VM for "newest-cni-366100" ...
	I0425 20:23:34.751073   80072 main.go:141] libmachine: (newest-cni-366100) Calling .Start
	I0425 20:23:34.751241   80072 main.go:141] libmachine: (newest-cni-366100) Ensuring networks are active...
	I0425 20:23:34.751997   80072 main.go:141] libmachine: (newest-cni-366100) Ensuring network default is active
	I0425 20:23:34.752404   80072 main.go:141] libmachine: (newest-cni-366100) Ensuring network mk-newest-cni-366100 is active
	I0425 20:23:34.752821   80072 main.go:141] libmachine: (newest-cni-366100) Getting domain xml...
	I0425 20:23:34.753623   80072 main.go:141] libmachine: (newest-cni-366100) Creating domain...
	I0425 20:23:35.996444   80072 main.go:141] libmachine: (newest-cni-366100) Waiting to get IP...
	I0425 20:23:35.997350   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:35.997842   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:35.997913   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:35.997804   80107 retry.go:31] will retry after 234.042053ms: waiting for machine to come up
	I0425 20:23:36.233193   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:36.233797   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:36.233857   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:36.233764   80107 retry.go:31] will retry after 349.383929ms: waiting for machine to come up
	I0425 20:23:36.584361   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:36.584917   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:36.584942   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:36.584884   80107 retry.go:31] will retry after 461.234598ms: waiting for machine to come up
	I0425 20:23:37.047383   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:37.047913   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:37.047943   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:37.047866   80107 retry.go:31] will retry after 538.387751ms: waiting for machine to come up
	I0425 20:23:37.588537   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:37.588987   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:37.589022   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:37.588944   80107 retry.go:31] will retry after 608.399222ms: waiting for machine to come up
	I0425 20:23:38.198714   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:38.199154   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:38.199177   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:38.199114   80107 retry.go:31] will retry after 877.686267ms: waiting for machine to come up
	I0425 20:23:39.078130   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:39.078606   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:39.078638   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:39.078554   80107 retry.go:31] will retry after 1.065414647s: waiting for machine to come up
	I0425 20:23:40.145266   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:40.145692   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:40.145735   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:40.145660   80107 retry.go:31] will retry after 1.028159381s: waiting for machine to come up
	I0425 20:23:41.175885   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:41.176331   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:41.176359   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:41.176268   80107 retry.go:31] will retry after 1.509700207s: waiting for machine to come up
	I0425 20:23:42.687455   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:42.687838   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:42.687870   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:42.687814   80107 retry.go:31] will retry after 1.661055477s: waiting for machine to come up
	I0425 20:23:44.351305   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:44.351851   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:44.351884   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:44.351780   80107 retry.go:31] will retry after 2.061790599s: waiting for machine to come up
	I0425 20:23:46.415486   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:46.416043   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:46.416081   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:46.415980   80107 retry.go:31] will retry after 3.087288552s: waiting for machine to come up
	I0425 20:23:49.507104   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:49.507505   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:49.507528   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:49.507474   80107 retry.go:31] will retry after 2.834636598s: waiting for machine to come up
	I0425 20:23:52.343340   80072 main.go:141] libmachine: (newest-cni-366100) DBG | domain newest-cni-366100 has defined MAC address 52:54:00:a7:4f:45 in network mk-newest-cni-366100
	I0425 20:23:52.343727   80072 main.go:141] libmachine: (newest-cni-366100) DBG | unable to find current IP address of domain newest-cni-366100 in network mk-newest-cni-366100
	I0425 20:23:52.343761   80072 main.go:141] libmachine: (newest-cni-366100) DBG | I0425 20:23:52.343667   80107 retry.go:31] will retry after 5.650772362s: waiting for machine to come up
	
	
	==> CRI-O <==
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.489432851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076638489409292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a401f042-5597-48fd-a39f-2a4e58ae14ad name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.490292478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec40bb47-ef5f-490a-af60-3f0c6ec1d432 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.490370756Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec40bb47-ef5f-490a-af60-3f0c6ec1d432 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.490603374Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075479620869366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c451d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67ba6a4ea4e35e2b43f358a16c582b03342b863fa0cb48159052b28cb979308,PodSandboxId:135332d33750e30e406c5f99481716254aaf1e04169c75aa4f9559c6d6f27dcd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075459338479741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09c7377f-44eb-4764-97e2-b21add69ffaf,},Annotations:map[string]string{io.kubernetes.container.hash: 46eec6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0,PodSandboxId:514fb8d1dca62bb204cf622d1239158567f838553285306b019e800412cb59b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075456566393378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xsptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b974e5-9b6e-4647-81cc-4fd8aa94077c,},Annotations:map[string]string{io.kubernetes.container.hash: d5a36c9f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149,PodSandboxId:b1ddcd0c049a993aae5bdf0fbbad3dca6a34653633cb29359f94a3ade5f4b962,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075448826715075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8247p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc053d9-814c-4882-b
d11-5111e5a72635,},Annotations:map[string]string{io.kubernetes.container.hash: b4aae625,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075448820410597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c4
51d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4,PodSandboxId:d910c794e803aa51440b28e285bb1585be2f856c2ea6b3d884bd90b96287e06c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075444158751433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3836a19decee787d7cd4e27481d1676,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5,PodSandboxId:ca08ac66072f9a1e15f19674769d4b4ff7503f1c89fb800634c6bc7ec3a012af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075444088482866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c772bbb62054949d2fd93d6437431eb8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 33e1ff1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650,PodSandboxId:f20399c1b1127cc7a57a58e92e51e5fd2e3e8043e242562a57d81c3c9ca6594e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075444124813130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddeaa81ec9a2358ea082dc210cd7af0d,},Annotations:map[string]string{io.kubernetes.container.hash:
f161a577,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86,PodSandboxId:fda25866d81792a46d7118f7e7f6b3879e4e201ef7e13b4cece366dafffb67f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075444073895580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4947ab8541c12a4889282bf39fe1af10,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec40bb47-ef5f-490a-af60-3f0c6ec1d432 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.538331330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80363fec-bfb6-4930-b2e2-6c71af8896b2 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.538751998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80363fec-bfb6-4930-b2e2-6c71af8896b2 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.541622691Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=46d982a1-d265-4923-9ce9-ec7c8fae9a59 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.542548391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076638542477477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46d982a1-d265-4923-9ce9-ec7c8fae9a59 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.543462310Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04875b80-b4a4-4ba3-a033-0ea9b1b71be4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.543569248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04875b80-b4a4-4ba3-a033-0ea9b1b71be4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.543830702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075479620869366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c451d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67ba6a4ea4e35e2b43f358a16c582b03342b863fa0cb48159052b28cb979308,PodSandboxId:135332d33750e30e406c5f99481716254aaf1e04169c75aa4f9559c6d6f27dcd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075459338479741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09c7377f-44eb-4764-97e2-b21add69ffaf,},Annotations:map[string]string{io.kubernetes.container.hash: 46eec6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0,PodSandboxId:514fb8d1dca62bb204cf622d1239158567f838553285306b019e800412cb59b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075456566393378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xsptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b974e5-9b6e-4647-81cc-4fd8aa94077c,},Annotations:map[string]string{io.kubernetes.container.hash: d5a36c9f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149,PodSandboxId:b1ddcd0c049a993aae5bdf0fbbad3dca6a34653633cb29359f94a3ade5f4b962,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075448826715075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8247p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc053d9-814c-4882-b
d11-5111e5a72635,},Annotations:map[string]string{io.kubernetes.container.hash: b4aae625,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075448820410597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c4
51d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4,PodSandboxId:d910c794e803aa51440b28e285bb1585be2f856c2ea6b3d884bd90b96287e06c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075444158751433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3836a19decee787d7cd4e27481d1676,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5,PodSandboxId:ca08ac66072f9a1e15f19674769d4b4ff7503f1c89fb800634c6bc7ec3a012af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075444088482866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c772bbb62054949d2fd93d6437431eb8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 33e1ff1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650,PodSandboxId:f20399c1b1127cc7a57a58e92e51e5fd2e3e8043e242562a57d81c3c9ca6594e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075444124813130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddeaa81ec9a2358ea082dc210cd7af0d,},Annotations:map[string]string{io.kubernetes.container.hash:
f161a577,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86,PodSandboxId:fda25866d81792a46d7118f7e7f6b3879e4e201ef7e13b4cece366dafffb67f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075444073895580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4947ab8541c12a4889282bf39fe1af10,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04875b80-b4a4-4ba3-a033-0ea9b1b71be4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.593441207Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4ad1488-46c8-4a7b-8977-25289bedbc8e name=/runtime.v1.RuntimeService/Version
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.593573499Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4ad1488-46c8-4a7b-8977-25289bedbc8e name=/runtime.v1.RuntimeService/Version
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.595245544Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25fd642c-5fa1-4c23-a54e-65ba9689c7b7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.595678372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076638595630235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25fd642c-5fa1-4c23-a54e-65ba9689c7b7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.596323669Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7161f4fe-6eeb-455c-8f18-65e49824ff0b name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.596432350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7161f4fe-6eeb-455c-8f18-65e49824ff0b name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.596643262Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075479620869366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c451d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67ba6a4ea4e35e2b43f358a16c582b03342b863fa0cb48159052b28cb979308,PodSandboxId:135332d33750e30e406c5f99481716254aaf1e04169c75aa4f9559c6d6f27dcd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075459338479741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09c7377f-44eb-4764-97e2-b21add69ffaf,},Annotations:map[string]string{io.kubernetes.container.hash: 46eec6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0,PodSandboxId:514fb8d1dca62bb204cf622d1239158567f838553285306b019e800412cb59b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075456566393378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xsptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b974e5-9b6e-4647-81cc-4fd8aa94077c,},Annotations:map[string]string{io.kubernetes.container.hash: d5a36c9f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149,PodSandboxId:b1ddcd0c049a993aae5bdf0fbbad3dca6a34653633cb29359f94a3ade5f4b962,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075448826715075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8247p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc053d9-814c-4882-b
d11-5111e5a72635,},Annotations:map[string]string{io.kubernetes.container.hash: b4aae625,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075448820410597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c4
51d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4,PodSandboxId:d910c794e803aa51440b28e285bb1585be2f856c2ea6b3d884bd90b96287e06c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075444158751433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3836a19decee787d7cd4e27481d1676,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5,PodSandboxId:ca08ac66072f9a1e15f19674769d4b4ff7503f1c89fb800634c6bc7ec3a012af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075444088482866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c772bbb62054949d2fd93d6437431eb8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 33e1ff1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650,PodSandboxId:f20399c1b1127cc7a57a58e92e51e5fd2e3e8043e242562a57d81c3c9ca6594e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075444124813130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddeaa81ec9a2358ea082dc210cd7af0d,},Annotations:map[string]string{io.kubernetes.container.hash:
f161a577,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86,PodSandboxId:fda25866d81792a46d7118f7e7f6b3879e4e201ef7e13b4cece366dafffb67f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075444073895580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4947ab8541c12a4889282bf39fe1af10,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7161f4fe-6eeb-455c-8f18-65e49824ff0b name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.641173731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d264e54-417b-4113-8daa-32c18a41528c name=/runtime.v1.RuntimeService/Version
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.641295981Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d264e54-417b-4113-8daa-32c18a41528c name=/runtime.v1.RuntimeService/Version
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.642900658Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4bd17fa-a95a-4f5e-b229-438b02f40bbd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.643864567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076638643817814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4bd17fa-a95a-4f5e-b229-438b02f40bbd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.644703615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69f24290-cf37-4809-a1f3-e377c8a9e20c name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.644758067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69f24290-cf37-4809-a1f3-e377c8a9e20c name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:23:58 embed-certs-512173 crio[732]: time="2024-04-25 20:23:58.645184179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714075479620869366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c451d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e67ba6a4ea4e35e2b43f358a16c582b03342b863fa0cb48159052b28cb979308,PodSandboxId:135332d33750e30e406c5f99481716254aaf1e04169c75aa4f9559c6d6f27dcd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714075459338479741,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 09c7377f-44eb-4764-97e2-b21add69ffaf,},Annotations:map[string]string{io.kubernetes.container.hash: 46eec6e8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0,PodSandboxId:514fb8d1dca62bb204cf622d1239158567f838553285306b019e800412cb59b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714075456566393378,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xsptj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b974e5-9b6e-4647-81cc-4fd8aa94077c,},Annotations:map[string]string{io.kubernetes.container.hash: d5a36c9f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149,PodSandboxId:b1ddcd0c049a993aae5bdf0fbbad3dca6a34653633cb29359f94a3ade5f4b962,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714075448826715075,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8247p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc053d9-814c-4882-b
d11-5111e5a72635,},Annotations:map[string]string{io.kubernetes.container.hash: b4aae625,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934,PodSandboxId:1fd7b8630b1b2195a5e8fbcd12a3181abceb0c8e6d0d793a87bedc9ded44df4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714075448820410597,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cd233f-57aa-4438-b18d-9b82f57c4
51d,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9df5c5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4,PodSandboxId:d910c794e803aa51440b28e285bb1585be2f856c2ea6b3d884bd90b96287e06c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714075444158751433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3836a19decee787d7cd4e27481d1676,},Annota
tions:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5,PodSandboxId:ca08ac66072f9a1e15f19674769d4b4ff7503f1c89fb800634c6bc7ec3a012af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714075444088482866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c772bbb62054949d2fd93d6437431eb8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 33e1ff1d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650,PodSandboxId:f20399c1b1127cc7a57a58e92e51e5fd2e3e8043e242562a57d81c3c9ca6594e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714075444124813130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddeaa81ec9a2358ea082dc210cd7af0d,},Annotations:map[string]string{io.kubernetes.container.hash:
f161a577,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86,PodSandboxId:fda25866d81792a46d7118f7e7f6b3879e4e201ef7e13b4cece366dafffb67f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714075444073895580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-512173,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4947ab8541c12a4889282bf39fe1af10,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69f24290-cf37-4809-a1f3-e377c8a9e20c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf330fbdb7c0d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   1fd7b8630b1b2       storage-provisioner
	e67ba6a4ea4e3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   135332d33750e       busybox
	8acd5626916a2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   514fb8d1dca62       coredns-7db6d8ff4d-xsptj
	1c3e9dc1ffc5f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      19 minutes ago      Running             kube-proxy                1                   b1ddcd0c049a9       kube-proxy-8247p
	84313d4e49ed1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   1fd7b8630b1b2       storage-provisioner
	3bae27a3c70b5       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      19 minutes ago      Running             kube-scheduler            1                   d910c794e803a       kube-scheduler-embed-certs-512173
	26f6a9b78dc23       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      19 minutes ago      Running             etcd                      1                   f20399c1b1127       etcd-embed-certs-512173
	911aab4d436ac       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      19 minutes ago      Running             kube-apiserver            1                   ca08ac66072f9       kube-apiserver-embed-certs-512173
	df45510448ab3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      19 minutes ago      Running             kube-controller-manager   1                   fda25866d8179       kube-controller-manager-embed-certs-512173
	
	
	==> coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36306 - 2835 "HINFO IN 6010454245023336192.5364635277441556275. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015831528s
	
	
	==> describe nodes <==
	Name:               embed-certs-512173
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-512173
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7
	                    minikube.k8s.io/name=embed-certs-512173
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_25T19_54_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Apr 2024 19:54:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-512173
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Apr 2024 20:23:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Apr 2024 20:19:56 +0000   Thu, 25 Apr 2024 19:54:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Apr 2024 20:19:56 +0000   Thu, 25 Apr 2024 19:54:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Apr 2024 20:19:56 +0000   Thu, 25 Apr 2024 19:54:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Apr 2024 20:19:56 +0000   Thu, 25 Apr 2024 20:04:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.7
	  Hostname:    embed-certs-512173
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a5b1d22c0c3443eb3283716fbcc51d0
	  System UUID:                1a5b1d22-c0c3-443e-b328-3716fbcc51d0
	  Boot ID:                    76e3f5ae-a8e6-4c4b-9e2a-5797bfe9b570
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-xsptj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-512173                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-512173             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-512173    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-8247p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-512173             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-mlkqr               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node embed-certs-512173 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node embed-certs-512173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node embed-certs-512173 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-512173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-512173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-512173 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node embed-certs-512173 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-512173 event: Registered Node embed-certs-512173 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node embed-certs-512173 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node embed-certs-512173 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node embed-certs-512173 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-512173 event: Registered Node embed-certs-512173 in Controller
	
	
	==> dmesg <==
	[Apr25 20:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062050] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049471] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.203093] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.631235] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.776532] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.526277] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.065973] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074866] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.213124] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.135683] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.321357] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[Apr25 20:04] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.068831] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.406962] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +5.624452] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.964307] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +1.747342] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.728604] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] <==
	{"level":"info","ts":"2024-04-25T20:04:06.047071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-25T20:04:06.047124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgPreVoteResp from 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-04-25T20:04:06.047159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became candidate at term 3"}
	{"level":"info","ts":"2024-04-25T20:04:06.047183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 3"}
	{"level":"info","ts":"2024-04-25T20:04:06.04721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became leader at term 3"}
	{"level":"info","ts":"2024-04-25T20:04:06.047235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 3"}
	{"level":"info","ts":"2024-04-25T20:04:06.093136Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T20:04:06.094101Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"856b77cd5251110c","local-member-attributes":"{Name:embed-certs-512173 ClientURLs:[https://192.168.50.7:2379]}","request-path":"/0/members/856b77cd5251110c/attributes","cluster-id":"b162f841703ff885","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-25T20:04:06.094273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-25T20:04:06.094676Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-25T20:04:06.094718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-25T20:04:06.096329Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-25T20:04:06.098033Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.7:2379"}
	{"level":"info","ts":"2024-04-25T20:14:06.142394Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":841}
	{"level":"info","ts":"2024-04-25T20:14:06.155115Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":841,"took":"11.595431ms","hash":4141422069,"current-db-size-bytes":2617344,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2617344,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-04-25T20:14:06.155211Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4141422069,"revision":841,"compact-revision":-1}
	{"level":"info","ts":"2024-04-25T20:19:06.150977Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1083}
	{"level":"info","ts":"2024-04-25T20:19:06.156649Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1083,"took":"5.052494ms","hash":702826852,"current-db-size-bytes":2617344,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-25T20:19:06.156767Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":702826852,"revision":1083,"compact-revision":841}
	{"level":"warn","ts":"2024-04-25T20:22:55.451576Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.346242ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-25T20:22:55.451795Z","caller":"traceutil/trace.go:171","msg":"trace[788660943] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1513; }","duration":"115.632669ms","start":"2024-04-25T20:22:55.336107Z","end":"2024-04-25T20:22:55.45174Z","steps":["trace[788660943] 'range keys from in-memory index tree'  (duration: 115.297763ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-25T20:22:56.238652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.485709ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1228514126702402762 > lease_revoke:<id:110c8f16dba55c7e>","response":"size:29"}
	{"level":"info","ts":"2024-04-25T20:22:56.239122Z","caller":"traceutil/trace.go:171","msg":"trace[1232719855] linearizableReadLoop","detail":"{readStateIndex:1781; appliedIndex:1780; }","duration":"179.857365ms","start":"2024-04-25T20:22:56.059247Z","end":"2024-04-25T20:22:56.239105Z","steps":["trace[1232719855] 'read index received'  (duration: 74.102µs)","trace[1232719855] 'applied index is now lower than readState.Index'  (duration: 179.781908ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-25T20:22:56.239488Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.264693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-25T20:22:56.239568Z","caller":"traceutil/trace.go:171","msg":"trace[1614679253] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1513; }","duration":"180.384512ms","start":"2024-04-25T20:22:56.059176Z","end":"2024-04-25T20:22:56.23956Z","steps":["trace[1614679253] 'agreement among raft nodes before linearized reading'  (duration: 180.270441ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:23:59 up 20 min,  0 users,  load average: 0.20, 0.29, 0.21
	Linux embed-certs-512173 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] <==
	I0425 20:17:08.535526       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:19:07.537462       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:19:07.537604       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0425 20:19:08.538618       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:19:08.538678       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:19:08.538688       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:19:08.538730       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:19:08.538775       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:19:08.540044       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:20:08.539839       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:20:08.539892       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:20:08.539900       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:20:08.541066       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:20:08.541203       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:20:08.541236       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0425 20:22:08.541048       1 handler_proxy.go:93] no RequestInfo found in the context
	W0425 20:22:08.541608       1 handler_proxy.go:93] no RequestInfo found in the context
	E0425 20:22:08.541652       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0425 20:22:08.541674       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0425 20:22:08.541892       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0425 20:22:08.543366       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] <==
	I0425 20:18:22.795580       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:18:52.102327       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:18:52.804736       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:19:22.108295       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:19:22.814806       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:19:52.113191       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:19:52.823382       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:20:22.121693       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:20:22.831535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0425 20:20:26.425692       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="196.508µs"
	I0425 20:20:40.423160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="88.796µs"
	E0425 20:20:52.127127       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:20:52.841027       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:21:22.133081       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:21:22.849085       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:21:52.138537       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:21:52.858768       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:22:22.149234       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:22:22.867608       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:22:52.159800       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:22:52.876652       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:23:22.168058       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:23:22.888362       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0425 20:23:52.175570       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0425 20:23:52.898362       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] <==
	I0425 20:04:09.010868       1 server_linux.go:69] "Using iptables proxy"
	I0425 20:04:09.019092       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.7"]
	I0425 20:04:09.061614       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0425 20:04:09.061642       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0425 20:04:09.061656       1 server_linux.go:165] "Using iptables Proxier"
	I0425 20:04:09.065135       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0425 20:04:09.065366       1 server.go:872] "Version info" version="v1.30.0"
	I0425 20:04:09.065418       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 20:04:09.066642       1 config.go:192] "Starting service config controller"
	I0425 20:04:09.067743       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0425 20:04:09.067499       1 config.go:319] "Starting node config controller"
	I0425 20:04:09.067992       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0425 20:04:09.067216       1 config.go:101] "Starting endpoint slice config controller"
	I0425 20:04:09.068213       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0425 20:04:09.169006       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0425 20:04:09.169081       1 shared_informer.go:320] Caches are synced for service config
	I0425 20:04:09.169324       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] <==
	I0425 20:04:05.303751       1 serving.go:380] Generated self-signed cert in-memory
	W0425 20:04:07.470863       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0425 20:04:07.470994       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0425 20:04:07.471013       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0425 20:04:07.471019       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0425 20:04:07.544638       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0425 20:04:07.544688       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0425 20:04:07.547143       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0425 20:04:07.547428       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0425 20:04:07.547553       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0425 20:04:07.547670       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0425 20:04:07.648014       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 25 20:21:03 embed-certs-512173 kubelet[953]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:21:05 embed-certs-512173 kubelet[953]: E0425 20:21:05.411493     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:21:17 embed-certs-512173 kubelet[953]: E0425 20:21:17.409216     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:21:29 embed-certs-512173 kubelet[953]: E0425 20:21:29.410231     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:21:40 embed-certs-512173 kubelet[953]: E0425 20:21:40.408236     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:21:55 embed-certs-512173 kubelet[953]: E0425 20:21:55.411233     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:22:03 embed-certs-512173 kubelet[953]: E0425 20:22:03.437861     953 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:22:03 embed-certs-512173 kubelet[953]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:22:03 embed-certs-512173 kubelet[953]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:22:03 embed-certs-512173 kubelet[953]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:22:03 embed-certs-512173 kubelet[953]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:22:09 embed-certs-512173 kubelet[953]: E0425 20:22:09.409613     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:22:20 embed-certs-512173 kubelet[953]: E0425 20:22:20.408845     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:22:35 embed-certs-512173 kubelet[953]: E0425 20:22:35.408566     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:22:47 embed-certs-512173 kubelet[953]: E0425 20:22:47.408305     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:23:02 embed-certs-512173 kubelet[953]: E0425 20:23:02.408734     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:23:03 embed-certs-512173 kubelet[953]: E0425 20:23:03.441881     953 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 25 20:23:03 embed-certs-512173 kubelet[953]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 25 20:23:03 embed-certs-512173 kubelet[953]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 25 20:23:03 embed-certs-512173 kubelet[953]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 25 20:23:03 embed-certs-512173 kubelet[953]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 25 20:23:13 embed-certs-512173 kubelet[953]: E0425 20:23:13.409051     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:23:28 embed-certs-512173 kubelet[953]: E0425 20:23:28.410360     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:23:43 embed-certs-512173 kubelet[953]: E0425 20:23:43.408757     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	Apr 25 20:23:56 embed-certs-512173 kubelet[953]: E0425 20:23:56.412014     953 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-mlkqr" podUID="85113896-4f9c-4b53-8bc9-c138b8a643fc"
	
	
	==> storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] <==
	I0425 20:04:08.952379       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0425 20:04:38.955526       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] <==
	I0425 20:04:39.726353       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0425 20:04:39.735143       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0425 20:04:39.736194       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0425 20:04:57.138090       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0425 20:04:57.138319       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-512173_f2e61450-39de-4da9-bd72-e7b218a0ab19!
	I0425 20:04:57.140852       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f560c90-9231-48af-a706-8beaa9fbf6e0", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-512173_f2e61450-39de-4da9-bd72-e7b218a0ab19 became leader
	I0425 20:04:57.239218       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-512173_f2e61450-39de-4da9-bd72-e7b218a0ab19!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-512173 -n embed-certs-512173
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-512173 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-mlkqr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-512173 describe pod metrics-server-569cc877fc-mlkqr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-512173 describe pod metrics-server-569cc877fc-mlkqr: exit status 1 (68.997283ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-mlkqr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-512173 describe pod metrics-server-569cc877fc-mlkqr: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (379.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (91.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:21:11.710455   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
E0425 20:21:48.448959   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.136:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.136:8443: connect: connection refused
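The repeated warnings above show the dashboard pod poll failing because the API server at 192.168.61.136:8443 refused connections for the entire wait window. As a rough manual cross-check (a diagnostic sketch only, reusing the profile name and address from the log above and assuming the same minikube binary plus an anonymously readable /healthz endpoint), one could run:

	out/minikube-linux-amd64 status -p old-k8s-version-210442
	curl -k https://192.168.61.136:8443/healthz
	out/minikube-linux-amd64 -p old-k8s-version-210442 ssh "sudo crictl ps -a | grep kube-apiserver"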
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210442 -n old-k8s-version-210442
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 2 (251.496791ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-210442" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-210442 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-210442 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.084µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-210442 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
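Had the API server been reachable, the image check above would have compared the dashboard-metrics-scraper deployment against registry.k8s.io/echoserver:1.4 (the custom image passed via --images=MetricsScraper=... in the Audit table below). A minimal sketch of the equivalent manual check, using only names that appear in this log and assuming a working kubeconfig context for the profile:

	kubectl --context old-k8s-version-210442 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	kubectl --context old-k8s-version-210442 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide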
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 2 (241.598016ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-210442 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-210442 logs -n 25: (1.631188153s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-120641 sudo cat                             | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo                                 | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo find                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-120641 sudo crio                            | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-120641                                      | flannel-120641               | jenkins | v1.33.0 | 25 Apr 24 19:53 UTC | 25 Apr 24 19:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-113000 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:54 UTC |
	|         | disable-driver-mounts-113000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:54 UTC | 25 Apr 24 19:55 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-512173            | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-744552             | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-142196  | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC | 25 Apr 24 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:55 UTC |                     |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-210442        | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-512173                 | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-512173                                  | embed-certs-512173           | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-744552                  | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-142196       | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-744552                                   | no-preload-744552            | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-142196 | jenkins | v1.33.0 | 25 Apr 24 19:58 UTC | 25 Apr 24 20:07 UTC |
	|         | default-k8s-diff-port-142196                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-210442             | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC | 25 Apr 24 19:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-210442                              | old-k8s-version-210442       | jenkins | v1.33.0 | 25 Apr 24 19:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 19:59:17
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 19:59:17.353932   72712 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:59:17.354045   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354055   72712 out.go:304] Setting ErrFile to fd 2...
	I0425 19:59:17.354059   72712 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:59:17.354269   72712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:59:17.354795   72712 out.go:298] Setting JSON to false
	I0425 19:59:17.355681   72712 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6103,"bootTime":1714069054,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:59:17.355740   72712 start.go:139] virtualization: kvm guest
	I0425 19:59:17.357921   72712 out.go:177] * [old-k8s-version-210442] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:59:17.359325   72712 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:59:17.360640   72712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:59:17.359305   72712 notify.go:220] Checking for updates...
	I0425 19:59:17.361801   72712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:59:17.363086   72712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:59:17.364512   72712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:59:17.365842   72712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:59:17.367508   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 19:59:17.367909   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.367946   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.382995   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
	I0425 19:59:17.383362   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.383991   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.384016   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.384378   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.384566   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.386317   72712 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0425 19:59:17.387599   72712 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:59:17.387904   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:59:17.387948   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:59:17.402999   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0425 19:59:17.403506   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:59:17.403962   72712 main.go:141] libmachine: Using API Version  1
	I0425 19:59:17.403986   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:59:17.404318   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:59:17.404472   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 19:59:17.438308   72712 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 19:59:17.439686   72712 start.go:297] selected driver: kvm2
	I0425 19:59:17.439716   72712 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.439831   72712 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:59:17.440486   72712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.440553   72712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 19:59:17.454719   72712 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 19:59:17.455114   72712 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 19:59:17.455184   72712 cni.go:84] Creating CNI manager for ""
	I0425 19:59:17.455203   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 19:59:17.455266   72712 start.go:340] cluster config:
	{Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 19:59:17.455393   72712 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 19:59:17.457210   72712 out.go:177] * Starting "old-k8s-version-210442" primary control-plane node in "old-k8s-version-210442" cluster
	I0425 19:59:18.474583   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:17.458384   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 19:59:17.458418   72712 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0425 19:59:17.458430   72712 cache.go:56] Caching tarball of preloaded images
	I0425 19:59:17.458517   72712 preload.go:173] Found /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0425 19:59:17.458529   72712 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0425 19:59:17.458638   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 19:59:17.458844   72712 start.go:360] acquireMachinesLock for old-k8s-version-210442: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 19:59:24.554517   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:27.626446   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:33.706451   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:36.778527   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:42.858471   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:45.930403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:52.010482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 19:59:55.082403   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:01.162466   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:04.234537   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:10.314506   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:13.386463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:19.466523   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:22.538461   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:28.622423   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:31.690489   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:37.770534   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:40.842458   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:46.922463   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:49.994524   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:56.074478   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:00:59.146487   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:05.226452   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:08.298480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:14.378455   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:17.450469   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:23.530513   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:26.602470   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:32.682497   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:35.754500   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:41.834480   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:44.906482   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:50.986468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:01:54.058502   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:00.138459   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:03.210554   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:09.290491   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:12.362472   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:18.442476   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:21.514468   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.599158   72220 start.go:364] duration metric: took 4m21.632012686s to acquireMachinesLock for "no-preload-744552"
	I0425 20:02:30.599206   72220 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:30.599212   72220 fix.go:54] fixHost starting: 
	I0425 20:02:30.599516   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:30.599545   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:30.614130   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0425 20:02:30.614502   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:30.614962   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:02:30.614979   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:30.615306   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:30.615513   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:30.615640   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:02:30.617129   72220 fix.go:112] recreateIfNeeded on no-preload-744552: state=Stopped err=<nil>
	I0425 20:02:30.617150   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	W0425 20:02:30.617300   72220 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:30.619253   72220 out.go:177] * Restarting existing kvm2 VM for "no-preload-744552" ...
	I0425 20:02:27.594454   71966 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.7:22: connect: no route to host
	I0425 20:02:30.596600   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:02:30.596654   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.596986   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:02:30.597016   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:02:30.597206   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:02:30.599042   71966 machine.go:97] duration metric: took 4m44.620242563s to provisionDockerMachine
	I0425 20:02:30.599079   71966 fix.go:56] duration metric: took 4m44.639860566s for fixHost
	I0425 20:02:30.599085   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 4m44.639890108s
	W0425 20:02:30.599104   71966 start.go:713] error starting host: provision: host is not running
	W0425 20:02:30.599182   71966 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0425 20:02:30.599192   71966 start.go:728] Will try again in 5 seconds ...
	I0425 20:02:30.620801   72220 main.go:141] libmachine: (no-preload-744552) Calling .Start
	I0425 20:02:30.620978   72220 main.go:141] libmachine: (no-preload-744552) Ensuring networks are active...
	I0425 20:02:30.621640   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network default is active
	I0425 20:02:30.621965   72220 main.go:141] libmachine: (no-preload-744552) Ensuring network mk-no-preload-744552 is active
	I0425 20:02:30.622317   72220 main.go:141] libmachine: (no-preload-744552) Getting domain xml...
	I0425 20:02:30.623010   72220 main.go:141] libmachine: (no-preload-744552) Creating domain...
	I0425 20:02:31.809967   72220 main.go:141] libmachine: (no-preload-744552) Waiting to get IP...
	I0425 20:02:31.810856   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:31.811353   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:31.811403   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:31.811308   73381 retry.go:31] will retry after 294.641704ms: waiting for machine to come up
	I0425 20:02:32.107955   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.108508   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.108542   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.108449   73381 retry.go:31] will retry after 373.307428ms: waiting for machine to come up
	I0425 20:02:32.483111   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.483590   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.483619   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.483546   73381 retry.go:31] will retry after 484.455862ms: waiting for machine to come up
	I0425 20:02:32.969188   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:32.969657   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:32.969694   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:32.969602   73381 retry.go:31] will retry after 382.359725ms: waiting for machine to come up
	I0425 20:02:33.353143   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.353598   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.353621   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.353550   73381 retry.go:31] will retry after 515.389674ms: waiting for machine to come up
	I0425 20:02:35.602273   71966 start.go:360] acquireMachinesLock for embed-certs-512173: {Name:mkc8fa3fe157ac0fd8735332d47b1b77ddc30348 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0425 20:02:33.870172   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:33.870652   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:33.870676   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:33.870603   73381 retry.go:31] will retry after 714.032032ms: waiting for machine to come up
	I0425 20:02:34.586478   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:34.586833   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:34.586861   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:34.586791   73381 retry.go:31] will retry after 1.005122465s: waiting for machine to come up
	I0425 20:02:35.593962   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:35.594367   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:35.594400   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:35.594310   73381 retry.go:31] will retry after 1.483740326s: waiting for machine to come up
	I0425 20:02:37.079306   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:37.079751   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:37.079784   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:37.079700   73381 retry.go:31] will retry after 1.828802911s: waiting for machine to come up
	I0425 20:02:38.910631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:38.911138   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:38.911163   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:38.911086   73381 retry.go:31] will retry after 1.528405609s: waiting for machine to come up
	I0425 20:02:40.441741   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:40.442251   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:40.442277   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:40.442200   73381 retry.go:31] will retry after 2.817901976s: waiting for machine to come up
	I0425 20:02:43.263903   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:43.264376   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:43.264408   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:43.264324   73381 retry.go:31] will retry after 2.258888981s: waiting for machine to come up
	I0425 20:02:45.525701   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:45.526139   72220 main.go:141] libmachine: (no-preload-744552) DBG | unable to find current IP address of domain no-preload-744552 in network mk-no-preload-744552
	I0425 20:02:45.526168   72220 main.go:141] libmachine: (no-preload-744552) DBG | I0425 20:02:45.526106   73381 retry.go:31] will retry after 4.008258204s: waiting for machine to come up
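The lines above show libmachine polling the libvirt DHCP leases for the domain's IP address, retrying with growing, jittered delays. A minimal Go sketch of such a wait loop follows; it is illustrative only (not minikube's retry.go), and the backoff parameters and lookupIP stub are assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases; it is a stub here.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls until the domain reports an address, sleeping a growing,
// jittered interval between attempts, like the "will retry after ..." lines above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay += delay / 2 // grow the base interval on each miss
	}
	return "", fmt.Errorf("%s never obtained an IP within %v", domain, timeout)
}

func main() {
	if _, err := waitForIP("no-preload-744552", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}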
	I0425 20:02:50.951421   72304 start.go:364] duration metric: took 4m34.5614094s to acquireMachinesLock for "default-k8s-diff-port-142196"
	I0425 20:02:50.951491   72304 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:02:50.951500   72304 fix.go:54] fixHost starting: 
	I0425 20:02:50.951906   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:02:50.951944   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:02:50.968074   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33481
	I0425 20:02:50.968452   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:02:50.968862   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:02:50.968886   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:02:50.969238   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:02:50.969460   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:02:50.969622   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:02:50.971100   72304 fix.go:112] recreateIfNeeded on default-k8s-diff-port-142196: state=Stopped err=<nil>
	I0425 20:02:50.971125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	W0425 20:02:50.971271   72304 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:02:50.974623   72304 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-142196" ...
	I0425 20:02:50.975991   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Start
	I0425 20:02:50.976154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring networks are active...
	I0425 20:02:50.976794   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network default is active
	I0425 20:02:50.977111   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Ensuring network mk-default-k8s-diff-port-142196 is active
	I0425 20:02:50.977490   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Getting domain xml...
	I0425 20:02:50.978200   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Creating domain...
	I0425 20:02:49.538522   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.538999   72220 main.go:141] libmachine: (no-preload-744552) Found IP for machine: 192.168.72.142
	I0425 20:02:49.539033   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has current primary IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.539043   72220 main.go:141] libmachine: (no-preload-744552) Reserving static IP address...
	I0425 20:02:49.539420   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.539458   72220 main.go:141] libmachine: (no-preload-744552) DBG | skip adding static IP to network mk-no-preload-744552 - found existing host DHCP lease matching {name: "no-preload-744552", mac: "52:54:00:2f:c5:04", ip: "192.168.72.142"}
	I0425 20:02:49.539469   72220 main.go:141] libmachine: (no-preload-744552) Reserved static IP address: 192.168.72.142
	I0425 20:02:49.539483   72220 main.go:141] libmachine: (no-preload-744552) Waiting for SSH to be available...
	I0425 20:02:49.539490   72220 main.go:141] libmachine: (no-preload-744552) DBG | Getting to WaitForSSH function...
	I0425 20:02:49.541631   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542042   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.542073   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.542221   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH client type: external
	I0425 20:02:49.542270   72220 main.go:141] libmachine: (no-preload-744552) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa (-rw-------)
	I0425 20:02:49.542300   72220 main.go:141] libmachine: (no-preload-744552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.142 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:02:49.542316   72220 main.go:141] libmachine: (no-preload-744552) DBG | About to run SSH command:
	I0425 20:02:49.542334   72220 main.go:141] libmachine: (no-preload-744552) DBG | exit 0
	I0425 20:02:49.670034   72220 main.go:141] libmachine: (no-preload-744552) DBG | SSH cmd err, output: <nil>: 
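Here the driver confirms SSH is reachable by shelling out to the system ssh binary with host-key checking disabled and running a bare "exit 0"; an empty error and empty output mean the guest is up. A rough sketch of that probe, using a hypothetical helper rather than the libmachine code, with the options taken from the log line above:

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs `exit 0` on the guest through the system ssh client, using
// roughly the options visible in the log (no host-key checking, key auth only).
func probeSSH(ip, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// IP and key path as shown in the log above.
	fmt.Println(probeSSH("192.168.72.142",
		"/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa"))
}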
	I0425 20:02:49.670414   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetConfigRaw
	I0425 20:02:49.671039   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:49.673279   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673592   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.673629   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.673878   72220 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/config.json ...
	I0425 20:02:49.674066   72220 machine.go:94] provisionDockerMachine start ...
	I0425 20:02:49.674083   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:49.674317   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.676767   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677084   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.677115   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.677238   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.677413   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677562   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.677698   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.677841   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.678037   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.678049   72220 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:02:49.790734   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:02:49.790764   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791028   72220 buildroot.go:166] provisioning hostname "no-preload-744552"
	I0425 20:02:49.791061   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:49.791248   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.793907   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794279   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.794313   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.794450   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.794649   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794787   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.794908   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.795054   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.795256   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.795277   72220 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-744552 && echo "no-preload-744552" | sudo tee /etc/hostname
	I0425 20:02:49.925459   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744552
	
	I0425 20:02:49.925483   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:49.928282   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928646   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:49.928680   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:49.928831   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:49.929012   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929194   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:49.929327   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:49.929481   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:49.929679   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:49.929709   72220 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-744552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-744552/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-744552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:02:50.052805   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:02:50.052841   72220 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:02:50.052861   72220 buildroot.go:174] setting up certificates
	I0425 20:02:50.052875   72220 provision.go:84] configureAuth start
	I0425 20:02:50.052887   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetMachineName
	I0425 20:02:50.053193   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.055800   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056145   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.056168   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.056339   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.058090   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058395   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.058429   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.058526   72220 provision.go:143] copyHostCerts
	I0425 20:02:50.058577   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:02:50.058587   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:02:50.058647   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:02:50.058742   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:02:50.058750   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:02:50.058774   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:02:50.058827   72220 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:02:50.058834   72220 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:02:50.058855   72220 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:02:50.058904   72220 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.no-preload-744552 san=[127.0.0.1 192.168.72.142 localhost minikube no-preload-744552]
	I0425 20:02:50.247711   72220 provision.go:177] copyRemoteCerts
	I0425 20:02:50.247768   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:02:50.247792   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.250146   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250560   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.250600   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.250780   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.250978   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.251128   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.251272   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.338105   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:02:50.365554   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0425 20:02:50.391433   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:02:50.416606   72220 provision.go:87] duration metric: took 363.720332ms to configureAuth
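configureAuth regenerates the docker-machine style server certificate so its SANs cover 127.0.0.1, the guest IP, localhost, minikube and the node name, signed by the CA under .minikube/certs. A self-contained sketch of issuing such a certificate is below; it self-signs so it stays runnable, whereas the real flow signs with the existing CA key, and the SAN values are simply those visible in the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical sketch: issue a server certificate whose SANs cover the
	// guest IP, localhost and the node name.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-744552"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-744552"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.142")},
	}
	// In the real flow the template is signed with the CA key from
	// ~/.minikube/certs; self-signing keeps this sketch self-contained.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}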
	I0425 20:02:50.416627   72220 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:02:50.416795   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:02:50.416876   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.419385   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419731   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.419764   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.419903   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.420079   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420322   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.420557   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.420724   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.420909   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.420929   72220 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:02:50.702065   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:02:50.702104   72220 machine.go:97] duration metric: took 1.028026584s to provisionDockerMachine
	I0425 20:02:50.702117   72220 start.go:293] postStartSetup for "no-preload-744552" (driver="kvm2")
	I0425 20:02:50.702131   72220 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:02:50.702165   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.702531   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:02:50.702572   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.705595   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.705948   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.705992   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.706173   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.706367   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.706588   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.706759   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.794791   72220 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:02:50.799592   72220 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:02:50.799621   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:02:50.799701   72220 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:02:50.799799   72220 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:02:50.799913   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:02:50.810796   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:02:50.836919   72220 start.go:296] duration metric: took 134.787005ms for postStartSetup
	I0425 20:02:50.836972   72220 fix.go:56] duration metric: took 20.237758066s for fixHost
	I0425 20:02:50.836995   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.839818   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840295   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.840325   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.840429   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.840600   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840752   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.840929   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.841079   72220 main.go:141] libmachine: Using SSH client type: native
	I0425 20:02:50.841307   72220 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.142 22 <nil> <nil>}
	I0425 20:02:50.841338   72220 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:02:50.951251   72220 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075370.921171901
	
	I0425 20:02:50.951272   72220 fix.go:216] guest clock: 1714075370.921171901
	I0425 20:02:50.951279   72220 fix.go:229] Guest: 2024-04-25 20:02:50.921171901 +0000 UTC Remote: 2024-04-25 20:02:50.836976462 +0000 UTC m=+282.018789867 (delta=84.195439ms)
	I0425 20:02:50.951312   72220 fix.go:200] guest clock delta is within tolerance: 84.195439ms
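The fix step reads the guest clock over SSH (the date command above), parses it, and compares it with the host's wall clock; only when the delta exceeds a tolerance would the time be corrected. A tiny sketch of that comparison follows; the 2s threshold is an assumed value for illustration, not minikube's actual tolerance.

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance reports whether the guest/host clock skew is acceptable.
func withinTolerance(guest, host time.Time, max time.Duration) bool {
	delta := guest.Sub(host)
	return math.Abs(float64(delta)) <= float64(max)
}

func main() {
	guest := time.Unix(1714075370, 921171901) // parsed from `date +%s.%N` over SSH
	host := time.Now()
	fmt.Println("delta:", guest.Sub(host), "ok:", withinTolerance(guest, host, 2*time.Second))
}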
	I0425 20:02:50.951321   72220 start.go:83] releasing machines lock for "no-preload-744552", held for 20.352126868s
	I0425 20:02:50.951348   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.951612   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:50.954231   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954614   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.954638   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.954821   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955240   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955419   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:02:50.955492   72220 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:02:50.955540   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.955659   72220 ssh_runner.go:195] Run: cat /version.json
	I0425 20:02:50.955688   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:02:50.958155   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958476   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958517   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958541   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.958661   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.958808   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.958903   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:50.958932   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.958935   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:50.959045   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:02:50.959181   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:50.959192   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:02:50.959360   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:02:50.959471   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:02:51.066809   72220 ssh_runner.go:195] Run: systemctl --version
	I0425 20:02:51.073198   72220 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:02:51.228547   72220 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:02:51.236443   72220 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:02:51.236518   72220 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:02:51.256226   72220 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:02:51.256244   72220 start.go:494] detecting cgroup driver to use...
	I0425 20:02:51.256307   72220 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:02:51.278596   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:02:51.295692   72220 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:02:51.295751   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:02:51.310940   72220 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:02:51.326072   72220 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:02:51.459064   72220 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:02:51.614563   72220 docker.go:233] disabling docker service ...
	I0425 20:02:51.614639   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:02:51.638817   72220 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:02:51.658265   72220 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:02:51.818412   72220 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:02:51.943830   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:02:51.960672   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:02:51.982028   72220 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:02:51.982090   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:51.994990   72220 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:02:51.995079   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.007907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.020225   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.033306   72220 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:02:52.046241   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.058282   72220 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:02:52.078907   72220 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
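The preceding sed invocations rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: the pause image becomes registry.k8s.io/pause:3.9, the cgroup manager is forced to cgroupfs with conmon placed in the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A small Go sketch of the first two edits expressed as regex replacements; this is illustrative only and not how minikube itself patches the file.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Starting content is a made-up stand-in for the drop-in file.
	conf := `pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Println(conf)
}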
	I0425 20:02:52.090258   72220 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:02:52.100796   72220 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:02:52.100873   72220 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:02:52.115600   72220 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:02:52.125458   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:02:52.288142   72220 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:02:52.430252   72220 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:02:52.430353   72220 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:02:52.436493   72220 start.go:562] Will wait 60s for crictl version
	I0425 20:02:52.436565   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.441427   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:02:52.479709   72220 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:02:52.479810   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.512180   72220 ssh_runner.go:195] Run: crio --version
	I0425 20:02:52.545115   72220 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:02:52.546476   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetIP
	I0425 20:02:52.549314   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549723   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:02:52.549759   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:02:52.549926   72220 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0425 20:02:52.554924   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:02:52.568804   72220 kubeadm.go:877] updating cluster {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:02:52.568958   72220 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:02:52.568997   72220 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:02:52.609095   72220 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:02:52.609117   72220 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:02:52.609156   72220 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.609188   72220 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.609185   72220 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.609214   72220 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.609227   72220 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.609256   72220 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.609334   72220 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.609370   72220 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610726   72220 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.610747   72220 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0425 20:02:52.610772   72220 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.610724   72220 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.610800   72220 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.610807   72220 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.611075   72220 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.611096   72220 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:52.753069   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0425 20:02:52.771762   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.825052   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908030   72220 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0425 20:02:52.908082   72220 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.908113   72220 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0425 20:02:52.908127   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.908135   72220 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.908164   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:52.915126   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0425 20:02:52.915132   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0425 20:02:52.967834   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:52.969385   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:52.973718   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0425 20:02:52.973787   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0425 20:02:52.973823   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:52.973870   72220 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:52.985763   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:52.986695   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.068153   72220 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0425 20:02:53.068196   72220 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.068269   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099237   72220 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0425 20:02:53.099257   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0425 20:02:53.099274   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099290   72220 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:53.099294   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0425 20:02:53.099330   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0425 20:02:53.099368   72220 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0425 20:02:53.099401   72220 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:53.099433   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.099333   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.115478   72220 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0425 20:02:53.115523   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0425 20:02:53.115526   72220 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:53.115610   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:53.550328   72220 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:52.240552   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting to get IP...
	I0425 20:02:52.241327   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241657   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.241757   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.241648   73527 retry.go:31] will retry after 195.006273ms: waiting for machine to come up
	I0425 20:02:52.438154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438702   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.438726   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.438657   73527 retry.go:31] will retry after 365.911905ms: waiting for machine to come up
	I0425 20:02:52.806281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806793   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:52.806826   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:52.806727   73527 retry.go:31] will retry after 448.572137ms: waiting for machine to come up
	I0425 20:02:53.257396   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257935   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.257966   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.257889   73527 retry.go:31] will retry after 560.886917ms: waiting for machine to come up
	I0425 20:02:53.820527   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820954   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:53.820979   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:53.820915   73527 retry.go:31] will retry after 514.294303ms: waiting for machine to come up
	I0425 20:02:54.336706   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337129   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:54.337154   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:54.337101   73527 retry.go:31] will retry after 853.040726ms: waiting for machine to come up
	I0425 20:02:55.192349   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:55.192857   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:55.192774   73527 retry.go:31] will retry after 1.17554782s: waiting for machine to come up
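
The retry.go lines above show libmachine repeatedly asking libvirt for the guest's DHCP lease and backing off between attempts. Below is a minimal Go sketch of that wait-for-IP loop; lookupIP, the backoff growth factor and the jitter are illustrative assumptions, not minikube's actual retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the hypervisor's DHCP leases; it fails
// until the guest has been handed an address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP with a growing, jittered delay until it succeeds
// or the deadline passes, mirroring the "will retry after ..." log lines.
func waitForIP(timeout time.Duration) (string, error) {
	start := time.Now()
	backoff := 200 * time.Millisecond
	for time.Since(start) < timeout {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2 // stretch the delay between polls
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
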
	I0425 20:02:56.232794   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (3.133436829s)
	I0425 20:02:56.232845   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0425 20:02:56.232854   72220 ssh_runner.go:235] Completed: which crictl: (3.133373607s)
	I0425 20:02:56.232875   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232915   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0425 20:02:56.232961   72220 ssh_runner.go:235] Completed: which crictl: (3.133515676s)
	I0425 20:02:56.232919   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0425 20:02:56.233011   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0425 20:02:56.233050   72220 ssh_runner.go:235] Completed: which crictl: (3.11742497s)
	I0425 20:02:56.233089   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0425 20:02:56.233126   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (3.117580594s)
	I0425 20:02:56.233160   72220 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.6828061s)
	I0425 20:02:56.233167   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0425 20:02:56.233207   72220 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0425 20:02:56.233242   72220 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:56.233248   72220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:56.233284   72220 ssh_runner.go:195] Run: which crictl
	I0425 20:02:56.323764   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0425 20:02:56.323884   72220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:02:56.323906   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0425 20:02:56.323989   72220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:02:58.553707   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.320762887s)
	I0425 20:02:58.553742   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0425 20:02:58.553768   72220 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.320739179s)
	I0425 20:02:58.553784   72220 ssh_runner.go:235] Completed: which crictl: (2.320487912s)
	I0425 20:02:58.553807   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0425 20:02:58.553838   72220 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:02:58.553864   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0: (2.320587538s)
	I0425 20:02:58.553889   72220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:02:58.553909   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0425 20:02:58.553948   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.229944417s)
	I0425 20:02:58.553959   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553989   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0425 20:02:58.554009   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0425 20:02:58.553910   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.23000183s)
	I0425 20:02:58.554069   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0425 20:02:58.602692   72220 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0425 20:02:58.602694   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0425 20:02:58.602819   72220 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:02:56.369693   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:56.370169   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:56.370115   73527 retry.go:31] will retry after 1.260629487s: waiting for machine to come up
	I0425 20:02:57.632705   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633187   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:57.633215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:57.633150   73527 retry.go:31] will retry after 1.291948113s: waiting for machine to come up
	I0425 20:02:58.926675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927167   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:02:58.927196   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:02:58.927111   73527 retry.go:31] will retry after 1.869565597s: waiting for machine to come up
	I0425 20:03:00.799357   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799820   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:00.799850   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:00.799750   73527 retry.go:31] will retry after 2.157801293s: waiting for machine to come up
	I0425 20:03:00.027830   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.473790165s)
	I0425 20:03:00.027869   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0425 20:03:00.027895   72220 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027943   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0425 20:03:00.027842   72220 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.424998268s)
	I0425 20:03:00.027985   72220 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0425 20:03:02.204218   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.176247608s)
	I0425 20:03:02.204254   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0425 20:03:02.204290   72220 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.204335   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0425 20:03:02.959407   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959789   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:02.959812   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:02.959745   73527 retry.go:31] will retry after 2.617480271s: waiting for machine to come up
	I0425 20:03:05.579300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | unable to find current IP address of domain default-k8s-diff-port-142196 in network mk-default-k8s-diff-port-142196
	I0425 20:03:05.579852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | I0425 20:03:05.579775   73527 retry.go:31] will retry after 4.058370199s: waiting for machine to come up
	I0425 20:03:06.132743   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.928385447s)
	I0425 20:03:06.132779   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0425 20:03:06.132805   72220 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:06.132857   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0425 20:03:08.314803   72220 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.181910584s)
	I0425 20:03:08.314842   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0425 20:03:08.314881   72220 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0425 20:03:08.314930   72220 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
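
At this point the 72220 process has been pushing each cached image tarball into CRI-O storage one at a time with `sudo podman load -i ...`, timing every call. The Go sketch below reproduces that serial load loop; running the command locally through os/exec is a stand-in for minikube's SSH runner, and the two paths are taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// loadImages loads each cached image tarball into container storage with
// `podman load`, one tarball at a time, and reports how long every load
// took, like the "Completed: sudo podman load -i ..." lines above.
func loadImages(tarballs []string) error {
	for _, tb := range tarballs {
		start := time.Now()
		out, err := exec.Command("sudo", "podman", "load", "-i", tb).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v: %s", tb, err, out)
		}
		fmt.Printf("Completed: sudo podman load -i %s: (%s)\n", tb, time.Since(start))
	}
	return nil
}

func main() {
	if err := loadImages([]string{
		"/var/lib/minikube/images/kube-scheduler_v1.30.0",
		"/var/lib/minikube/images/etcd_3.5.12-0",
	}); err != nil {
		fmt.Println(err)
	}
}
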
	I0425 20:03:11.255486   72712 start.go:364] duration metric: took 3m53.796595105s to acquireMachinesLock for "old-k8s-version-210442"
	I0425 20:03:11.255550   72712 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:11.255569   72712 fix.go:54] fixHost starting: 
	I0425 20:03:11.256083   72712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:11.256128   72712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:11.272950   72712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0425 20:03:11.273365   72712 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:11.273878   72712 main.go:141] libmachine: Using API Version  1
	I0425 20:03:11.273907   72712 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:11.274277   72712 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:11.274487   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:11.274666   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetState
	I0425 20:03:11.276420   72712 fix.go:112] recreateIfNeeded on old-k8s-version-210442: state=Stopped err=<nil>
	I0425 20:03:11.276454   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	W0425 20:03:11.276608   72712 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:11.279156   72712 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-210442" ...
	I0425 20:03:09.639300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639833   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Found IP for machine: 192.168.39.123
	I0425 20:03:09.639867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has current primary IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.639884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserving static IP address...
	I0425 20:03:09.640257   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.640281   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | skip adding static IP to network mk-default-k8s-diff-port-142196 - found existing host DHCP lease matching {name: "default-k8s-diff-port-142196", mac: "52:54:00:10:24:a7", ip: "192.168.39.123"}
	I0425 20:03:09.640300   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Reserved static IP address: 192.168.39.123
	I0425 20:03:09.640313   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Waiting for SSH to be available...
	I0425 20:03:09.640321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Getting to WaitForSSH function...
	I0425 20:03:09.643058   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643371   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.643400   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.643506   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH client type: external
	I0425 20:03:09.643557   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa (-rw-------)
	I0425 20:03:09.643586   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:09.643609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | About to run SSH command:
	I0425 20:03:09.643618   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | exit 0
	I0425 20:03:09.766707   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:09.767091   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetConfigRaw
	I0425 20:03:09.767818   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:09.770573   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771012   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.771047   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.771296   72304 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/config.json ...
	I0425 20:03:09.771580   72304 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:09.771609   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:09.771884   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.774255   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.774699   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.774866   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.775044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775213   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.775362   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.775520   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.775781   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.775797   72304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:09.884259   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:09.884288   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884519   72304 buildroot.go:166] provisioning hostname "default-k8s-diff-port-142196"
	I0425 20:03:09.884547   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:09.884747   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:09.887391   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.887798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:09.887829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:09.888003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:09.888215   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:09.888542   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:09.888703   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:09.888918   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:09.888934   72304 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-142196 && echo "default-k8s-diff-port-142196" | sudo tee /etc/hostname
	I0425 20:03:10.015919   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-142196
	
	I0425 20:03:10.015951   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.018640   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.018955   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.018987   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.019201   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.019398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.019729   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.019906   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.020098   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.020120   72304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-142196' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-142196/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-142196' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:10.145789   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
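
The SSH command above keeps /etc/hosts in sync with the new hostname without duplicating entries: if the name is missing, it either rewrites an existing 127.0.1.1 line or appends one. A small Go sketch of the same idempotent update follows, operating on a string so it stays self-contained; it is an illustration, not minikube's code.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell snippet above: if the hostname is not
// already present in the hosts content, either rewrite an existing
// 127.0.1.1 line or append a new one.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostsEntry(hosts, "default-k8s-diff-port-142196"))
}
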
	I0425 20:03:10.145822   72304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:10.145873   72304 buildroot.go:174] setting up certificates
	I0425 20:03:10.145886   72304 provision.go:84] configureAuth start
	I0425 20:03:10.145899   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetMachineName
	I0425 20:03:10.146185   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:10.148943   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149309   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.149342   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.149492   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.152000   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152418   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.152445   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.152621   72304 provision.go:143] copyHostCerts
	I0425 20:03:10.152681   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:10.152693   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:10.152758   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:10.152890   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:10.152905   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:10.152940   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:10.153033   72304 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:10.153044   72304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:10.153072   72304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:10.153145   72304 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-142196 san=[127.0.0.1 192.168.39.123 default-k8s-diff-port-142196 localhost minikube]
	I0425 20:03:10.572412   72304 provision.go:177] copyRemoteCerts
	I0425 20:03:10.572473   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:10.572496   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.575083   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575395   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.575421   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.575560   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.575696   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.575799   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.575916   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:10.657850   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:10.685493   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0425 20:03:10.713230   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:10.740577   72304 provision.go:87] duration metric: took 594.674196ms to configureAuth
	I0425 20:03:10.740604   72304 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:10.740835   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:10.740916   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:10.743709   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744039   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:10.744071   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:10.744236   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:10.744434   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744621   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:10.744723   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:10.744901   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:10.745065   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:10.745083   72304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:11.017816   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:11.017844   72304 machine.go:97] duration metric: took 1.24624593s to provisionDockerMachine
	I0425 20:03:11.017858   72304 start.go:293] postStartSetup for "default-k8s-diff-port-142196" (driver="kvm2")
	I0425 20:03:11.017871   72304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:11.017892   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.018195   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:11.018231   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.020759   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021067   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.021092   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.021226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.021403   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.021600   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.021729   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.106290   72304 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:11.111532   72304 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:11.111560   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:11.111645   72304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:11.111744   72304 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:11.111856   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:11.122216   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:11.150472   72304 start.go:296] duration metric: took 132.600197ms for postStartSetup
	I0425 20:03:11.150520   72304 fix.go:56] duration metric: took 20.199020729s for fixHost
	I0425 20:03:11.150544   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.153466   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.153798   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.153824   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.154055   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.154289   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154483   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.154635   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.154824   72304 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:11.154991   72304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0425 20:03:11.155001   72304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 20:03:11.255330   72304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075391.221756501
	
	I0425 20:03:11.255357   72304 fix.go:216] guest clock: 1714075391.221756501
	I0425 20:03:11.255365   72304 fix.go:229] Guest: 2024-04-25 20:03:11.221756501 +0000 UTC Remote: 2024-04-25 20:03:11.15052524 +0000 UTC m=+294.908822896 (delta=71.231261ms)
	I0425 20:03:11.255384   72304 fix.go:200] guest clock delta is within tolerance: 71.231261ms
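
The fix.go lines above read the guest's clock over SSH with `date +%s.%N`, compare it with the host-side clock, and only resync if the difference exceeds a tolerance (here the 71ms delta is accepted). A sketch of that delta check is below; the parsing details and the 2s tolerance are assumptions for illustration.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the local clock, plus whether that delta is inside the
// given tolerance, like the "guest clock delta is within tolerance" line.
func clockDelta(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false, fmt.Errorf("parsing guest clock %q: %v", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	// Example value in the same shape the guest returns over SSH; run today
	// the delta is large, so the check reports false.
	delta, ok, err := clockDelta("1714075391.221756501", 2*time.Second)
	fmt.Println(delta, ok, err)
}
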
	I0425 20:03:11.255388   72304 start.go:83] releasing machines lock for "default-k8s-diff-port-142196", held for 20.303917474s
	I0425 20:03:11.255419   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.255700   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:11.258740   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259076   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.259104   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.259414   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.259906   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260102   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:11.260197   72304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:11.260241   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.260350   72304 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:11.260374   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:11.262843   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263001   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263216   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263245   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263365   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:11.263398   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:11.263480   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263669   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263679   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:11.263864   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:11.263867   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264026   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:11.264039   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.264203   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:11.280701   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .Start
	I0425 20:03:11.280895   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring networks are active...
	I0425 20:03:11.281729   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network default is active
	I0425 20:03:11.282158   72712 main.go:141] libmachine: (old-k8s-version-210442) Ensuring network mk-old-k8s-version-210442 is active
	I0425 20:03:11.282639   72712 main.go:141] libmachine: (old-k8s-version-210442) Getting domain xml...
	I0425 20:03:11.283399   72712 main.go:141] libmachine: (old-k8s-version-210442) Creating domain...
	I0425 20:03:11.339564   72304 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:11.364667   72304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:11.526308   72304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:11.533487   72304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:11.533563   72304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:11.552090   72304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:11.552120   72304 start.go:494] detecting cgroup driver to use...
	I0425 20:03:11.552196   72304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:11.569573   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:11.584425   72304 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:11.584489   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:11.599083   72304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:11.613739   72304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:11.739574   72304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:11.911318   72304 docker.go:233] disabling docker service ...
	I0425 20:03:11.911390   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:11.928743   72304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:11.946101   72304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:12.112740   72304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:12.246863   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:12.269551   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:12.298838   72304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:12.298907   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.312059   72304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:12.312113   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.324076   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.336239   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.350088   72304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:12.368362   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.385406   72304 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.407195   72304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:12.420065   72304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:12.431195   72304 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:12.431260   72304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:12.446263   72304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:12.457137   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:12.622756   72304 ssh_runner.go:195] Run: sudo systemctl restart crio
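
The sed one-liners above adjust /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: the pause image is pinned to registry.k8s.io/pause:3.9 and the cgroup manager is switched to cgroupfs. The Go sketch below performs the same two rewrites on an in-memory copy of the config; it is a simplified illustration of the edit, not the tool minikube uses.

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mimics the sed commands above: point CRI-O at the desired
// pause image and switch the cgroup manager by rewriting the relevant lines
// of 02-crio.conf, held here as a string so the example is self-contained.
func rewriteCrioConf(conf, pauseImage, cgroupMgr string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "`+cgroupMgr+`"`)
	return conf
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(conf, "registry.k8s.io/pause:3.9", "cgroupfs"))
}
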
	I0425 20:03:12.799932   72304 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:12.800012   72304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:12.807795   72304 start.go:562] Will wait 60s for crictl version
	I0425 20:03:12.807862   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:03:12.813860   72304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:12.861249   72304 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
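
After restarting crio, minikube waits up to 60s for /var/run/crio/crio.sock to appear and then probes the runtime with `crictl version`, which produced the version block above. A minimal sketch of the socket wait is below; the poll interval and the shorter timeout in main are arbitrary choices for the example.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the CRI socket the way the "Will wait 60s for
// socket path" line describes: stat the path until it exists or the
// deadline expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %v", path, timeout)
}

func main() {
	// The log allows 60s; a shorter timeout keeps the example quick.
	if err := waitForSocket("/var/run/crio/crio.sock", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
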
	I0425 20:03:12.861327   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.896140   72304 ssh_runner.go:195] Run: crio --version
	I0425 20:03:12.942768   72304 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:03:09.079550   72220 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0425 20:03:09.079607   72220 cache_images.go:123] Successfully loaded all cached images
	I0425 20:03:09.079615   72220 cache_images.go:92] duration metric: took 16.470485982s to LoadCachedImages
	I0425 20:03:09.079629   72220 kubeadm.go:928] updating node { 192.168.72.142 8443 v1.30.0 crio true true} ...
	I0425 20:03:09.079764   72220 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-744552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:09.079839   72220 ssh_runner.go:195] Run: crio config
	I0425 20:03:09.139170   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:09.139194   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:09.139206   72220 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:09.139225   72220 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.142 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-744552 NodeName:no-preload-744552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:09.139365   72220 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-744552"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.142
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.142"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:09.139426   72220 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:09.151828   72220 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:09.151884   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:09.163310   72220 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0425 20:03:09.183132   72220 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:09.203038   72220 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0425 20:03:09.223717   72220 ssh_runner.go:195] Run: grep 192.168.72.142	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:09.228467   72220 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:09.243976   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:09.361475   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
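	The bash one-liner above pins control-plane.minikube.internal in /etc/hosts by dropping any stale entry and appending the fresh IP mapping before the kubelet is restarted. A minimal Go sketch of that same rewrite, not minikube's actual implementation; the path, IP and hostname are taken from the log lines above:
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// pinHost rewrites hostsPath so that exactly one line maps hostname to ip,
	// mirroring the logged `grep -v ... ; echo ... > /tmp/h.$$` one-liner.
	func pinHost(hostsPath, ip, hostname string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		if err := pinHost("/etc/hosts", "192.168.72.142", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}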
	I0425 20:03:09.380862   72220 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552 for IP: 192.168.72.142
	I0425 20:03:09.380886   72220 certs.go:194] generating shared ca certs ...
	I0425 20:03:09.380901   72220 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:09.381076   72220 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:09.381132   72220 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:09.381147   72220 certs.go:256] generating profile certs ...
	I0425 20:03:09.381254   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/client.key
	I0425 20:03:09.381337   72220 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key.a705cb96
	I0425 20:03:09.381392   72220 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key
	I0425 20:03:09.381538   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:09.381586   72220 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:09.381601   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:09.381638   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:09.381668   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:09.381702   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:09.381761   72220 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:09.382459   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:09.423895   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:09.462481   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:09.491394   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:09.532779   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0425 20:03:09.569107   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0425 20:03:09.597381   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:09.623962   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/no-preload-744552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:09.651141   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:09.677295   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:09.702404   72220 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:09.729275   72220 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:09.748421   72220 ssh_runner.go:195] Run: openssl version
	I0425 20:03:09.754848   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:09.768121   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774468   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.774529   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:09.783568   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:09.799120   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:09.812983   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818660   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.818740   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:09.826091   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:09.840115   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:09.853372   72220 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858387   72220 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.858455   72220 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:09.864693   72220 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:09.876755   72220 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:09.882829   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:09.890219   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:09.897091   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:09.906017   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:09.913154   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:09.919989   72220 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
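	Each `openssl x509 -noout -checkend 86400` run above asks whether a certificate expires within the next 24 hours; only if all checks pass does the restart reuse the existing certs. A hedged standard-library equivalent (the file path is illustrative, taken from the log):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, the same question `openssl x509 -checkend` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}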
	I0425 20:03:09.926552   72220 kubeadm.go:391] StartCluster: {Name:no-preload-744552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-744552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:09.926671   72220 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:09.926734   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:09.971983   72220 cri.go:89] found id: ""
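	Listing kube-system containers is delegated to crictl with a label filter; the empty result (found id: "") means there is nothing to stop before the restart. A small sketch of the same query via os/exec, assuming crictl is available on the node's PATH:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Mirrors: crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Println("no kube-system containers found")
			return
		}
		fmt.Println("kube-system container IDs:", ids)
	}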
	I0425 20:03:09.972071   72220 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:09.983371   72220 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:09.983399   72220 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:09.983406   72220 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:09.983451   72220 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:09.994047   72220 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:09.995080   72220 kubeconfig.go:125] found "no-preload-744552" server: "https://192.168.72.142:8443"
	I0425 20:03:09.997202   72220 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:10.007666   72220 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.142
	I0425 20:03:10.007703   72220 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:10.007713   72220 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:10.007752   72220 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:10.049581   72220 cri.go:89] found id: ""
	I0425 20:03:10.049679   72220 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:10.071032   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:10.083240   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:10.083267   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:10.083314   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:10.093444   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:10.093507   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:10.104291   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:10.114596   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:10.114659   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:10.125118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.138299   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:10.138362   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:10.152185   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:10.163493   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:10.163555   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
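	The grep/rm pairs above implement one rule: if a kubeconfig under /etc/kubernetes does not reference the expected control-plane endpoint (or is missing), delete it so the following kubeadm phases regenerate it. A minimal sketch of the same rule, assuming local file access rather than ssh_runner:
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			// A missing or unreadable file is treated like a stale one: remove it
			// so `kubeadm init phase kubeconfig all` recreates it.
			if err != nil || !strings.Contains(string(data), endpoint) {
				if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
					fmt.Fprintln(os.Stderr, rmErr)
				}
			}
		}
	}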
	I0425 20:03:10.177214   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:10.188286   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:10.312536   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.497483   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.184911769s)
	I0425 20:03:11.497531   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.753732   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:11.871246   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
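	The restart path then replays the kubeadm init phases in order against the freshly copied /var/tmp/minikube/kubeadm.yaml. A hedged sketch of the same sequence via os/exec; the binary path, phase list and config path come straight from the log lines above:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, phase := range phases {
			args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("/var/lib/minikube/binaries/v1.30.0/kubeadm", args...)
			// Prepend the pinned binaries directory, as the logged `sudo env PATH=...` does.
			cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.30.0:"+os.Getenv("PATH"))
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", phase, err)
				return
			}
		}
	}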
	I0425 20:03:11.968366   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:11.968445   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.468885   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:12.968598   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:13.037502   72220 api_server.go:72] duration metric: took 1.069135698s to wait for apiserver process to appear ...
	I0425 20:03:13.037542   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:13.037568   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:13.038540   72220 api_server.go:269] stopped: https://192.168.72.142:8443/healthz: Get "https://192.168.72.142:8443/healthz": dial tcp 192.168.72.142:8443: connect: connection refused
	I0425 20:03:13.537713   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:12.944206   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetIP
	I0425 20:03:12.947412   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.947822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:12.947852   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:12.948086   72304 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:12.953504   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:12.969171   72304 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:12.969344   72304 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:12.969402   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:13.016509   72304 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:13.016585   72304 ssh_runner.go:195] Run: which lz4
	I0425 20:03:13.022023   72304 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 20:03:13.027861   72304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:13.027896   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:14.913405   72304 crio.go:462] duration metric: took 1.891428846s to copy over tarball
	I0425 20:03:14.913466   72304 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
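	Extracting the preload tarball is just a tar invocation with an lz4 decompressor, exactly as logged. A minimal Go wrapper around that same command (the paths and flags are the ones in the log; tar and lz4 are assumed to be present on the node):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		cmd := exec.Command("tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "extract failed:", err)
			os.Exit(1)
		}
		fmt.Println("preload tarball extracted under /var")
	}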
	I0425 20:03:12.659136   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting to get IP...
	I0425 20:03:12.660227   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.660770   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.660843   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.660724   73691 retry.go:31] will retry after 234.96602ms: waiting for machine to come up
	I0425 20:03:12.897395   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:12.897966   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:12.897993   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:12.897913   73691 retry.go:31] will retry after 387.692223ms: waiting for machine to come up
	I0425 20:03:13.287742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.288414   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.288443   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.288397   73691 retry.go:31] will retry after 461.897892ms: waiting for machine to come up
	I0425 20:03:13.752061   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:13.752574   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:13.752603   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:13.752513   73691 retry.go:31] will retry after 452.347315ms: waiting for machine to come up
	I0425 20:03:14.206275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.206684   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.206708   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.206629   73691 retry.go:31] will retry after 466.12355ms: waiting for machine to come up
	I0425 20:03:14.674265   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:14.674788   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:14.674818   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:14.674735   73691 retry.go:31] will retry after 697.70071ms: waiting for machine to come up
	I0425 20:03:15.373862   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:15.374297   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:15.374325   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:15.374252   73691 retry.go:31] will retry after 835.73273ms: waiting for machine to come up
	I0425 20:03:16.211394   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:16.211870   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:16.211902   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:16.211815   73691 retry.go:31] will retry after 1.26739043s: waiting for machine to come up
	I0425 20:03:16.441793   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.441829   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.441848   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.506023   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:16.506057   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:16.538293   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:16.544891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:16.544925   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.038519   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.049842   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.049883   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:17.538420   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:17.545891   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:17.545929   72220 api_server.go:103] status: https://192.168.72.142:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:18.038192   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:03:18.042957   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:03:18.063131   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:18.063171   72220 api_server.go:131] duration metric: took 5.025619242s to wait for apiserver health ...
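	The healthz loop above simply re-requests https://192.168.72.142:8443/healthz until the apiserver answers 200 instead of the 403/500 responses seen while post-start hooks settle. A hedged stand-alone poller with the same shape; TLS verification is skipped here purely for brevity, whereas a real client would trust the cluster CA:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Sketch only: skip certificate verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.142:8443/healthz"
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("healthz returned", resp.StatusCode)
			} else {
				fmt.Println("healthz unreachable:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for healthz")
	}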
	I0425 20:03:18.063182   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:03:18.063192   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:18.405047   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:03:18.552639   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:18.565507   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:03:18.591534   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:17.662135   72304 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.748640149s)
	I0425 20:03:17.662171   72304 crio.go:469] duration metric: took 2.748741671s to extract the tarball
	I0425 20:03:17.662184   72304 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:17.706288   72304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:17.773537   72304 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:03:17.773565   72304 cache_images.go:84] Images are preloaded, skipping loading
	I0425 20:03:17.773575   72304 kubeadm.go:928] updating node { 192.168.39.123 8444 v1.30.0 crio true true} ...
	I0425 20:03:17.773709   72304 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-142196 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:17.773799   72304 ssh_runner.go:195] Run: crio config
	I0425 20:03:17.836354   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:17.836379   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:17.836391   72304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:17.836411   72304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-142196 NodeName:default-k8s-diff-port-142196 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:03:17.836545   72304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-142196"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:17.836599   72304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:03:17.848441   72304 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:17.848506   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:17.860320   72304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0425 20:03:17.885528   72304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:17.905701   72304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0425 20:03:17.925064   72304 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:17.930085   72304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:17.944507   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:18.108208   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:18.134428   72304 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196 for IP: 192.168.39.123
	I0425 20:03:18.134456   72304 certs.go:194] generating shared ca certs ...
	I0425 20:03:18.134479   72304 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:18.134672   72304 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:18.134745   72304 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:18.134761   72304 certs.go:256] generating profile certs ...
	I0425 20:03:18.134870   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/client.key
	I0425 20:03:18.245553   72304 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key.1fb61bcb
	I0425 20:03:18.245666   72304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key
	I0425 20:03:18.245833   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:18.245880   72304 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:18.245894   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:18.245934   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:18.245964   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:18.245997   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:18.246058   72304 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:18.246994   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:18.293000   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:18.322296   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:18.358060   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:18.390999   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0425 20:03:18.420333   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:18.450050   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:18.477983   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/default-k8s-diff-port-142196/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:18.506030   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:18.538394   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:18.574361   72304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:18.610827   72304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:18.634141   72304 ssh_runner.go:195] Run: openssl version
	I0425 20:03:18.640647   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:18.653988   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659400   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.659458   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:18.665868   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:18.679247   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:18.692272   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697356   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.697410   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:18.703694   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:03:18.716412   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:18.733362   72304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739598   72304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.739651   72304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:18.748175   72304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:18.764492   72304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:18.770594   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:18.777414   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:18.784614   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:18.793453   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:18.800721   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:18.807982   72304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0425 20:03:18.814836   72304 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-142196 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-142196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
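
The StartCluster dump above is the profile's full cluster config printed verbatim. As a reading aid, here is a trimmed, hypothetical Go struct holding only the fields that matter for this run (port 8444, kvm2 driver, crio runtime, Kubernetes v1.30.0); the field names are assumptions based on the log, not minikube's actual config types.

package main

import "fmt"

// Illustrative subset of the profile config logged above.
type nodeConfig struct {
	IP                string
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
}

type clusterConfig struct {
	Name          string
	Driver        string
	APIServerPort int
	Nodes         []nodeConfig
}

func main() {
	cfg := clusterConfig{
		Name:          "default-k8s-diff-port-142196",
		Driver:        "kvm2",
		APIServerPort: 8444,
		Nodes: []nodeConfig{{
			IP:                "192.168.39.123",
			Port:              8444,
			KubernetesVersion: "v1.30.0",
			ContainerRuntime:  "crio",
			ControlPlane:      true,
		}},
	}
	fmt.Printf("%+v\n", cfg)
}
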
	I0425 20:03:18.814942   72304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:18.814992   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.864771   72304 cri.go:89] found id: ""
	I0425 20:03:18.864834   72304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:18.878200   72304 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:18.878238   72304 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:18.878245   72304 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:18.878305   72304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:18.892071   72304 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:18.892973   72304 kubeconfig.go:125] found "default-k8s-diff-port-142196" server: "https://192.168.39.123:8444"
	I0425 20:03:18.894860   72304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:18.907959   72304 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.123
	I0425 20:03:18.907989   72304 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:18.907998   72304 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:18.908045   72304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:18.951245   72304 cri.go:89] found id: ""
	I0425 20:03:18.951311   72304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:18.980033   72304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:18.995453   72304 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:18.995473   72304 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:18.995524   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0425 20:03:19.007409   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:19.007470   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:19.019782   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0425 20:03:19.031410   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:19.031493   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:19.043439   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.055936   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:19.055999   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:19.067986   72304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0425 20:03:19.080785   72304 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:19.080869   72304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
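
The cleanup loop above greps each static kubeconfig under /etc/kubernetes for the expected control-plane URL and removes files that do not reference it; here every file is simply missing, so each one is removed before being regenerated. A hedged Go sketch of the same logic (a missing file is treated the same as a stale one):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// removeIfStale deletes path unless its contents already mention wantServer,
// mirroring the grep-then-rm sequence in the log.
func removeIfStale(path, wantServer string) error {
	data, err := os.ReadFile(path)
	if err == nil && bytes.Contains(data, []byte(wantServer)) {
		return nil // config already targets the right endpoint
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	want := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, want); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
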
	I0425 20:03:19.092802   72304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:19.105024   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.240077   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.259510   72304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.019382485s)
	I0425 20:03:20.259544   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.489833   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:20.599319   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
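
Note that the restart path does not run a full "kubeadm init"; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing /var/tmp/minikube/kubeadm.yaml. A minimal sketch of driving those same phases from Go is below; the phase names and config path come from the log, but the real invocations happen over SSH with a pinned PATH.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml" // path taken from the log
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
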
	I0425 20:03:20.784451   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:20.784606   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.284759   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
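
Between starting the control plane and probing /healthz, the code above polls for a kube-apiserver process with pgrep roughly every half second. An illustrative polling loop (a hypothetical helper, not minikube's api_server.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls "pgrep -xnf pattern" until it succeeds or the
// timeout elapses, mirroring the repeated Run lines in the log.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver is running")
}
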
	I0425 20:03:17.480654   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:17.481045   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:17.481094   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:17.481007   73691 retry.go:31] will retry after 1.238487953s: waiting for machine to come up
	I0425 20:03:18.720512   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:18.720940   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:18.720965   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:18.720902   73691 retry.go:31] will retry after 2.277078909s: waiting for machine to come up
	I0425 20:03:20.999749   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:21.000275   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:21.000305   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:21.000223   73691 retry.go:31] will retry after 2.81059851s: waiting for machine to come up
	I0425 20:03:18.940880   72220 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:18.983894   72220 system_pods.go:61] "coredns-7db6d8ff4d-67sp6" [0fc3ee18-e3fe-4f4a-a5bd-4d6e3497bfa3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:18.983953   72220 system_pods.go:61] "etcd-no-preload-744552" [f3768d08-4cc6-42aa-9d1c-b0fd5d6ffed5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:18.983975   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [9d927e1f-4ddb-4b54-b1f1-f5248cb51745] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:18.983984   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [cc71ce6c-22ba-4189-99dc-dd2da6506d37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:18.983993   72220 system_pods.go:61] "kube-proxy-whkbk" [a22b51a9-4854-41f5-bb5a-a81920a09b5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0425 20:03:18.984026   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [5f01cd76-d6b7-4033-9aa9-38cac91965d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:18.984037   72220 system_pods.go:61] "metrics-server-569cc877fc-6n2gd" [03283a78-d44f-4f60-9743-680c18aeace3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:18.984052   72220 system_pods.go:61] "storage-provisioner" [4211811e-85ce-4da2-bc16-16909c26ced7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0425 20:03:18.984064   72220 system_pods.go:74] duration metric: took 392.509163ms to wait for pod list to return data ...
	I0425 20:03:18.984077   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:18.989373   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:18.989405   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:18.989424   72220 node_conditions.go:105] duration metric: took 5.341625ms to run NodePressure ...
	I0425 20:03:18.989446   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:19.809313   72220 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818730   72220 kubeadm.go:733] kubelet initialised
	I0425 20:03:19.818753   72220 kubeadm.go:734] duration metric: took 9.41696ms waiting for restarted kubelet to initialise ...
	I0425 20:03:19.818761   72220 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:19.825762   72220 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:21.834658   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:21.785434   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:21.855046   72304 api_server.go:72] duration metric: took 1.070594042s to wait for apiserver process to appear ...
	I0425 20:03:21.855127   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:03:21.855156   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:21.855709   72304 api_server.go:269] stopped: https://192.168.39.123:8444/healthz: Get "https://192.168.39.123:8444/healthz": dial tcp 192.168.39.123:8444: connect: connection refused
	I0425 20:03:22.355555   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.430068   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.430099   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.430115   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.487089   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:03:24.487124   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:03:24.855301   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:24.861270   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:24.861299   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
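
The probe keeps GETting /healthz until it answers 200 "ok": the earlier 403 means the anonymous request was rejected before RBAC bootstrap finished, and each 500 lists which post-start hooks are still pending. A hedged sketch of such a poller; certificate verification is skipped, as an anonymous probe against the apiserver's self-signed cert would have to.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 or the timeout expires, printing each intermediate status.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.123:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
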
	I0425 20:03:25.356007   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.360802   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.360839   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:25.855336   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:25.861719   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:25.861753   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:23.812963   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:23.813457   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:23.813476   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:23.813429   73691 retry.go:31] will retry after 2.508562986s: waiting for machine to come up
	I0425 20:03:26.323267   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:26.323733   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | unable to find current IP address of domain old-k8s-version-210442 in network mk-old-k8s-version-210442
	I0425 20:03:26.323761   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | I0425 20:03:26.323699   73691 retry.go:31] will retry after 4.475703543s: waiting for machine to come up
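
The libmachine lines interleaved here belong to a different profile (old-k8s-version-210442) whose KVM domain has not picked up a DHCP lease yet, so the driver retries with a growing, jittered delay. A toy Go version of that retry loop, with a stand-in lookup function instead of a real libvirt query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying libvirt's DHCP leases for the
// domain's MAC address; here it simply fails the first few attempts.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.61.10", nil // made-up address for illustration
}

func main() {
	delay := time.Second
	for attempt := 0; attempt < 10; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, like the "will retry after ..." lines.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	fmt.Println("gave up waiting for an IP")
}
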
	I0425 20:03:26.355254   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.360977   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.361011   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:26.855547   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:26.860178   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:26.860203   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.355819   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.360466   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:03:27.360491   72304 api_server.go:103] status: https://192.168.39.123:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:03:27.856219   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:03:27.861706   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:03:27.868486   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:03:27.868525   72304 api_server.go:131] duration metric: took 6.013385579s to wait for apiserver health ...
	I0425 20:03:27.868536   72304 cni.go:84] Creating CNI manager for ""
	I0425 20:03:27.868544   72304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:27.870534   72304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:03:24.335382   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:24.335415   72220 pod_ready.go:81] duration metric: took 4.509621487s for pod "coredns-7db6d8ff4d-67sp6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:24.335427   72220 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:26.342530   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:28.841444   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:27.871863   72304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:03:27.885767   72304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
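
"Configuring bridge CNI" amounts to writing a small conflist to /etc/cni/net.d/1-k8s.conflist (496 bytes here). The exact contents are not in the log; the Go snippet below writes a plausible bridge+portmap conflist of roughly that shape, as an assumption rather than the file minikube actually ships.

package main

import (
	"fmt"
	"os"
)

// A typical bridge CNI conflist; the real file minikube writes may differ.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Destination path taken from the log; requires root on the guest.
	path := "/etc/cni/net.d/1-k8s.conflist"
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote", path)
}
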
	I0425 20:03:27.910270   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:03:27.922984   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:03:27.923016   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:03:27.923024   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:03:27.923030   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:03:27.923036   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:03:27.923041   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:03:27.923052   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:03:27.923057   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:03:27.923061   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:03:27.923067   72304 system_pods.go:74] duration metric: took 12.774358ms to wait for pod list to return data ...
	I0425 20:03:27.923073   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:03:27.927553   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:03:27.927582   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:03:27.927596   72304 node_conditions.go:105] duration metric: took 4.517775ms to run NodePressure ...
	I0425 20:03:27.927616   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:28.213013   72304 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217836   72304 kubeadm.go:733] kubelet initialised
	I0425 20:03:28.217860   72304 kubeadm.go:734] duration metric: took 4.809ms waiting for restarted kubelet to initialise ...
	I0425 20:03:28.217869   72304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:28.225122   72304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.229920   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229940   72304 pod_ready.go:81] duration metric: took 4.794976ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.229948   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.229954   72304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.234362   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234380   72304 pod_ready.go:81] duration metric: took 4.417955ms for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.234388   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.234394   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.238885   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238904   72304 pod_ready.go:81] duration metric: took 4.504378ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.238917   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.238924   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.314420   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314446   72304 pod_ready.go:81] duration metric: took 75.511589ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.314457   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.314464   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:28.714128   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714165   72304 pod_ready.go:81] duration metric: took 399.694231ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:28.714178   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-proxy-bqmtp" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:28.714187   72304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.113925   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113958   72304 pod_ready.go:81] duration metric: took 399.760651ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.113971   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.113977   72304 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:29.514107   72304 pod_ready.go:97] node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514132   72304 pod_ready.go:81] duration metric: took 400.147308ms for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:03:29.514142   72304 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-142196" hosting pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:29.514149   72304 pod_ready.go:38] duration metric: took 1.296270699s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
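
The pod_ready loop above checks each system-critical pod's Ready condition and skips pods whose node is not yet Ready. A condensed client-go sketch of the core per-pod check; the kubeconfig path and pod name are placeholders lifted from the log, and the loop polls a single pod rather than the whole label set.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18757-6355/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll one pod until Ready or a 4m deadline, as the log does per pod.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-z6ls5", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
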
	I0425 20:03:29.514167   72304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:03:29.528766   72304 ops.go:34] apiserver oom_adj: -16
	I0425 20:03:29.528791   72304 kubeadm.go:591] duration metric: took 10.650540723s to restartPrimaryControlPlane
	I0425 20:03:29.528801   72304 kubeadm.go:393] duration metric: took 10.713975851s to StartCluster
	I0425 20:03:29.528816   72304 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.528887   72304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:29.530674   72304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:29.530951   72304 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.123 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:03:29.532792   72304 out.go:177] * Verifying Kubernetes components...
	I0425 20:03:29.531039   72304 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:03:29.531203   72304 config.go:182] Loaded profile config "default-k8s-diff-port-142196": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:29.534328   72304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:29.534349   72304 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534377   72304 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534383   72304 addons.go:243] addon metrics-server should already be in state true
	I0425 20:03:29.534331   72304 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534416   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534441   72304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-142196"
	I0425 20:03:29.534334   72304 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-142196"
	I0425 20:03:29.534536   72304 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.534549   72304 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:03:29.534584   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.534786   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534814   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.534839   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534815   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.534956   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.535000   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.551165   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
	I0425 20:03:29.551680   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552007   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0425 20:03:29.552399   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.552419   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.552445   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.552864   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553003   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.553028   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.553066   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.553409   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.553621   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0425 20:03:29.554006   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.554024   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.554057   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.554555   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.554579   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.554908   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.555432   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.555487   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.557216   72304 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-142196"
	W0425 20:03:29.557238   72304 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:03:29.557267   72304 host.go:66] Checking if "default-k8s-diff-port-142196" exists ...
	I0425 20:03:29.557642   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.557675   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.570559   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0425 20:03:29.571013   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.571538   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.571562   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.571944   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.572152   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.574003   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.576061   72304 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:03:29.575108   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33777
	I0425 20:03:29.575580   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0425 20:03:29.577356   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:03:29.577374   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:03:29.577394   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.577861   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.577964   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.578333   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578356   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578514   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.578543   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.578735   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578909   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.578947   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.579603   72304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:29.579633   72304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:29.580871   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.582436   72304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:29.581297   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.581851   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.583941   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.583971   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.583994   72304 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.584021   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:03:29.584031   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.584044   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.584282   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.584430   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.586538   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.586880   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.586901   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.587119   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.587314   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.587470   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.587560   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.595882   72304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38335
	I0425 20:03:29.596234   72304 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:29.596711   72304 main.go:141] libmachine: Using API Version  1
	I0425 20:03:29.596728   72304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:29.597146   72304 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:29.597321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetState
	I0425 20:03:29.598599   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .DriverName
	I0425 20:03:29.598799   72304 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:29.598811   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:03:29.598822   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHHostname
	I0425 20:03:29.600829   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601125   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:24:a7", ip: ""} in network mk-default-k8s-diff-port-142196: {Iface:virbr3 ExpiryTime:2024-04-25 21:03:03 +0000 UTC Type:0 Mac:52:54:00:10:24:a7 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:default-k8s-diff-port-142196 Clientid:01:52:54:00:10:24:a7}
	I0425 20:03:29.601149   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | domain default-k8s-diff-port-142196 has defined IP address 192.168.39.123 and MAC address 52:54:00:10:24:a7 in network mk-default-k8s-diff-port-142196
	I0425 20:03:29.601321   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHPort
	I0425 20:03:29.601409   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHKeyPath
	I0425 20:03:29.601479   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .GetSSHUsername
	I0425 20:03:29.601537   72304 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/default-k8s-diff-port-142196/id_rsa Username:docker}
	I0425 20:03:29.772228   72304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:29.799159   72304 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:29.893622   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:03:29.893647   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:03:29.895090   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:03:29.919651   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:03:29.919673   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:03:29.929992   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:03:30.004488   72304 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:30.004519   72304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:03:30.061525   72304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.113425632s)
	I0425 20:03:31.043511   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043460   72304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.148338843s)
	I0425 20:03:31.043539   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043587   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043524   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043629   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043675   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.043894   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043910   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043946   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.043953   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.043964   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.043973   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.043992   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044107   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044132   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044159   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044199   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044209   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044219   72304 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-142196"
	I0425 20:03:31.044216   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044226   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044237   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044253   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.044262   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.044542   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044566   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.044662   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.044671   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) DBG | Closing plugin on server side
	I0425 20:03:31.044682   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.052429   72304 main.go:141] libmachine: Making call to close driver server
	I0425 20:03:31.052451   72304 main.go:141] libmachine: (default-k8s-diff-port-142196) Calling .Close
	I0425 20:03:31.052675   72304 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:03:31.052694   72304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:03:31.055680   72304 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0425 20:03:31.057271   72304 addons.go:505] duration metric: took 1.526243989s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0425 20:03:32.187768   71966 start.go:364] duration metric: took 56.585448027s to acquireMachinesLock for "embed-certs-512173"
	I0425 20:03:32.187838   71966 start.go:96] Skipping create...Using existing machine configuration
	I0425 20:03:32.187849   71966 fix.go:54] fixHost starting: 
	I0425 20:03:32.188220   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:03:32.188266   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:03:32.207172   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0425 20:03:32.207627   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:03:32.208170   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:03:32.208196   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:03:32.208493   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:03:32.208700   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:32.208837   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:03:32.210552   71966 fix.go:112] recreateIfNeeded on embed-certs-512173: state=Stopped err=<nil>
	I0425 20:03:32.210577   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	W0425 20:03:32.210741   71966 fix.go:138] unexpected machine state, will restart: <nil>
	I0425 20:03:32.213400   71966 out.go:177] * Restarting existing kvm2 VM for "embed-certs-512173" ...
	I0425 20:03:30.803467   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804014   72712 main.go:141] libmachine: (old-k8s-version-210442) Found IP for machine: 192.168.61.136
	I0425 20:03:30.804041   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserving static IP address...
	I0425 20:03:30.804057   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has current primary IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.804495   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.804535   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | skip adding static IP to network mk-old-k8s-version-210442 - found existing host DHCP lease matching {name: "old-k8s-version-210442", mac: "52:54:00:11:0b:ca", ip: "192.168.61.136"}
	I0425 20:03:30.804562   72712 main.go:141] libmachine: (old-k8s-version-210442) Reserved static IP address: 192.168.61.136
	I0425 20:03:30.804582   72712 main.go:141] libmachine: (old-k8s-version-210442) Waiting for SSH to be available...
	I0425 20:03:30.804599   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Getting to WaitForSSH function...
	I0425 20:03:30.807110   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807533   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.807556   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.807706   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH client type: external
	I0425 20:03:30.807725   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa (-rw-------)
	I0425 20:03:30.807767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:30.807783   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | About to run SSH command:
	I0425 20:03:30.807815   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | exit 0
	I0425 20:03:30.935091   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | SSH cmd err, output: <nil>: 
	I0425 20:03:30.935445   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetConfigRaw
	I0425 20:03:30.936168   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:30.938767   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939193   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.939246   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.939428   72712 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/config.json ...
	I0425 20:03:30.939630   72712 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:30.939649   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:30.939870   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:30.942320   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942742   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:30.942771   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:30.942923   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:30.943113   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943306   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:30.943468   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:30.943640   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:30.943842   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:30.943854   72712 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:31.052598   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:31.052625   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.052821   72712 buildroot.go:166] provisioning hostname "old-k8s-version-210442"
	I0425 20:03:31.052844   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.053080   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.056324   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056713   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.056745   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.056885   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.057056   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057190   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.057375   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.057549   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.057724   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.057742   72712 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-210442 && echo "old-k8s-version-210442" | sudo tee /etc/hostname
	I0425 20:03:31.188461   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-210442
	
	I0425 20:03:31.188494   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.191628   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192088   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.192117   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.192332   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.192519   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192655   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.192767   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.192944   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.193142   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.193167   72712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-210442' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-210442/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-210442' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:31.317374   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:31.317402   72712 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:31.317436   72712 buildroot.go:174] setting up certificates
	I0425 20:03:31.317447   72712 provision.go:84] configureAuth start
	I0425 20:03:31.317461   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetMachineName
	I0425 20:03:31.317778   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:31.321012   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321388   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.321421   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.321698   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.323976   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324326   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.324354   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.324523   72712 provision.go:143] copyHostCerts
	I0425 20:03:31.324573   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:31.324584   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:31.324656   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:31.324764   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:31.324778   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:31.324807   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:31.324879   72712 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:31.324890   72712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:31.324915   72712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:31.324978   72712 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-210442 san=[127.0.0.1 192.168.61.136 localhost minikube old-k8s-version-210442]
	I0425 20:03:31.410674   72712 provision.go:177] copyRemoteCerts
	I0425 20:03:31.410728   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:31.410755   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.413170   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413449   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.413491   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.413634   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.413832   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.413988   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.414156   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:31.502759   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:31.536662   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0425 20:03:31.565106   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:31.593254   72712 provision.go:87] duration metric: took 275.793443ms to configureAuth
	I0425 20:03:31.593287   72712 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:31.593621   72712 config.go:182] Loaded profile config "old-k8s-version-210442": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0425 20:03:31.593720   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.596515   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.596827   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.596859   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.597057   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.597287   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597448   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.597624   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.597775   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:31.597927   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:31.597942   72712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:31.925149   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:31.925182   72712 machine.go:97] duration metric: took 985.540626ms to provisionDockerMachine
	I0425 20:03:31.925199   72712 start.go:293] postStartSetup for "old-k8s-version-210442" (driver="kvm2")
	I0425 20:03:31.925211   72712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:31.925258   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:31.925560   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:31.925596   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:31.928532   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.928982   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:31.929013   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:31.929232   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:31.929458   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:31.929637   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:31.929787   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.023009   72712 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:32.029391   72712 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:32.029426   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:32.029508   72712 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:32.029576   72712 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:32.029664   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:32.046596   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:32.077323   72712 start.go:296] duration metric: took 152.112632ms for postStartSetup
	I0425 20:03:32.077396   72712 fix.go:56] duration metric: took 20.821829703s for fixHost
	I0425 20:03:32.077425   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.080136   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080477   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.080526   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.080636   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.080836   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081067   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.081283   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.081493   72712 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:32.081695   72712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.136 22 <nil> <nil>}
	I0425 20:03:32.081711   72712 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0425 20:03:32.187617   72712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075412.163072845
	
	I0425 20:03:32.187642   72712 fix.go:216] guest clock: 1714075412.163072845
	I0425 20:03:32.187652   72712 fix.go:229] Guest: 2024-04-25 20:03:32.163072845 +0000 UTC Remote: 2024-04-25 20:03:32.07740605 +0000 UTC m=+254.767943919 (delta=85.666795ms)
	I0425 20:03:32.187675   72712 fix.go:200] guest clock delta is within tolerance: 85.666795ms
	I0425 20:03:32.187682   72712 start.go:83] releasing machines lock for "old-k8s-version-210442", held for 20.932154384s
	I0425 20:03:32.187709   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.187998   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:32.190538   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.190898   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.190932   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.191077   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191817   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.191996   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .DriverName
	I0425 20:03:32.192076   72712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:32.192116   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.192208   72712 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:32.192230   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHHostname
	I0425 20:03:32.194821   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.194988   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195191   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195212   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195334   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:32.195368   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:32.195500   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195673   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195677   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHPort
	I0425 20:03:32.195847   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHKeyPath
	I0425 20:03:32.195866   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196063   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.196083   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetSSHUsername
	I0425 20:03:32.196219   72712 sshutil.go:53] new ssh client: &{IP:192.168.61.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/old-k8s-version-210442/id_rsa Username:docker}
	I0425 20:03:32.276462   72712 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:32.300979   72712 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:30.842282   72220 pod_ready.go:102] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:32.843750   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.843779   72220 pod_ready.go:81] duration metric: took 8.508343704s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.843791   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850293   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.850316   72220 pod_ready.go:81] duration metric: took 6.517764ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.850327   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855621   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.855657   72220 pod_ready.go:81] duration metric: took 5.31225ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.855671   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860450   72220 pod_ready.go:92] pod "kube-proxy-whkbk" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.860483   72220 pod_ready.go:81] duration metric: took 4.797706ms for pod "kube-proxy-whkbk" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.860505   72220 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865268   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:32.865286   72220 pod_ready.go:81] duration metric: took 4.774354ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.865294   72220 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:32.458446   72712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:32.465434   72712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:32.465518   72712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:32.486929   72712 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:32.486954   72712 start.go:494] detecting cgroup driver to use...
	I0425 20:03:32.487019   72712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:32.509425   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:32.530999   72712 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:32.531059   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:32.547280   72712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:32.563594   72712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:32.699207   72712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:32.875013   72712 docker.go:233] disabling docker service ...
	I0425 20:03:32.875096   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:32.897149   72712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:32.916105   72712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:33.071143   72712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:33.231529   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:33.252919   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:33.277388   72712 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0425 20:03:33.277457   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.290889   72712 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:33.290953   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.305488   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.319263   72712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:33.332961   72712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:33.354086   72712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:33.373431   72712 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:33.373517   72712 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:33.398458   72712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0425 20:03:33.418683   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:33.595555   72712 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:33.808015   72712 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:33.810391   72712 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:33.817593   72712 start.go:562] Will wait 60s for crictl version
	I0425 20:03:33.817646   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:33.823381   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:33.866310   72712 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:33.866411   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.905561   72712 ssh_runner.go:195] Run: crio --version
	I0425 20:03:33.952764   72712 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0425 20:03:32.214679   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Start
	I0425 20:03:32.214880   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring networks are active...
	I0425 20:03:32.215746   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network default is active
	I0425 20:03:32.216106   71966 main.go:141] libmachine: (embed-certs-512173) Ensuring network mk-embed-certs-512173 is active
	I0425 20:03:32.216566   71966 main.go:141] libmachine: (embed-certs-512173) Getting domain xml...
	I0425 20:03:32.217397   71966 main.go:141] libmachine: (embed-certs-512173) Creating domain...
	I0425 20:03:33.554665   71966 main.go:141] libmachine: (embed-certs-512173) Waiting to get IP...
	I0425 20:03:33.555670   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.556123   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.556186   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.556089   73884 retry.go:31] will retry after 278.996701ms: waiting for machine to come up
	I0425 20:03:33.836750   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:33.837273   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:33.837301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:33.837244   73884 retry.go:31] will retry after 324.410317ms: waiting for machine to come up
	I0425 20:03:34.163017   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.163490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.163518   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.163457   73884 retry.go:31] will retry after 403.985826ms: waiting for machine to come up
	I0425 20:03:34.568824   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.569364   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.569397   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.569330   73884 retry.go:31] will retry after 427.12179ms: waiting for machine to come up
	I0425 20:03:34.998092   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:34.998684   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:34.998709   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:34.998646   73884 retry.go:31] will retry after 710.71475ms: waiting for machine to come up
	I0425 20:03:35.710643   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:35.711707   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:35.711736   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:35.711616   73884 retry.go:31] will retry after 806.283051ms: waiting for machine to come up
	I0425 20:03:31.803034   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:33.813548   72304 node_ready.go:53] node "default-k8s-diff-port-142196" has status "Ready":"False"
	I0425 20:03:35.304283   72304 node_ready.go:49] node "default-k8s-diff-port-142196" has status "Ready":"True"
	I0425 20:03:35.304311   72304 node_ready.go:38] duration metric: took 5.505123781s for node "default-k8s-diff-port-142196" to be "Ready" ...
	I0425 20:03:35.304323   72304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:03:35.311480   72304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320910   72304 pod_ready.go:92] pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:35.320938   72304 pod_ready.go:81] duration metric: took 9.425507ms for pod "coredns-7db6d8ff4d-z6ls5" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:35.320953   72304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
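pod_ready.go here polls each system-critical pod until its Ready condition turns True, with a 6m0s cap per pod. A rough equivalent using client-go, assuming the kubeconfig path and pod name from this run purely for illustration (this is not the test helper itself):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // kubeconfig path taken from this run; adjust for your environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18757-6355/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-z6ls5", metav1.GetOptions{})
            if err == nil && podIsReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("timed out waiting for pod to be Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }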
	I0425 20:03:33.954161   72712 main.go:141] libmachine: (old-k8s-version-210442) Calling .GetIP
	I0425 20:03:33.957316   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.957778   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:0b:ca", ip: ""} in network mk-old-k8s-version-210442: {Iface:virbr4 ExpiryTime:2024-04-25 20:53:07 +0000 UTC Type:0 Mac:52:54:00:11:0b:ca Iaid: IPaddr:192.168.61.136 Prefix:24 Hostname:old-k8s-version-210442 Clientid:01:52:54:00:11:0b:ca}
	I0425 20:03:33.957811   72712 main.go:141] libmachine: (old-k8s-version-210442) DBG | domain old-k8s-version-210442 has defined IP address 192.168.61.136 and MAC address 52:54:00:11:0b:ca in network mk-old-k8s-version-210442
	I0425 20:03:33.958080   72712 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:33.964467   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:03:33.984277   72712 kubeadm.go:877] updating cluster {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:33.984437   72712 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 20:03:33.984499   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:34.049402   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:34.049479   72712 ssh_runner.go:195] Run: which lz4
	I0425 20:03:34.055519   72712 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 20:03:34.061481   72712 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:34.061522   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0425 20:03:36.271646   72712 crio.go:462] duration metric: took 2.216165414s to copy over tarball
	I0425 20:03:36.271722   72712 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
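The preload step above is: stat the tarball on the guest, scp the cached ~473 MB archive over when it is missing, then unpack it into /var with lz4. A sketch of just the extraction command via os/exec, with the same flags the log shows (it only makes sense inside the guest, where /preloaded.tar.lz4 exists):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Extract the preloaded image tarball into /var, preserving xattrs and
        // using lz4 for decompression -- the command ssh_runner executes above.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "extract failed:", err)
            os.Exit(1)
        }
        fmt.Println("preload tarball extracted")
    }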
	I0425 20:03:34.877483   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:37.373822   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:36.519514   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:36.520052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:36.520085   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:36.519968   73884 retry.go:31] will retry after 990.986618ms: waiting for machine to come up
	I0425 20:03:37.513151   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:37.513636   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:37.513669   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:37.513574   73884 retry.go:31] will retry after 1.371471682s: waiting for machine to come up
	I0425 20:03:38.886926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:38.887491   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:38.887527   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:38.887415   73884 retry.go:31] will retry after 1.633505345s: waiting for machine to come up
	I0425 20:03:40.523438   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:40.523975   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:40.524004   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:40.523926   73884 retry.go:31] will retry after 2.280577933s: waiting for machine to come up
	I0425 20:03:37.330040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.350040   72304 pod_ready.go:102] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:39.894331   72712 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.622580176s)
	I0425 20:03:39.894364   72712 crio.go:469] duration metric: took 3.62268463s to extract the tarball
	I0425 20:03:39.894373   72712 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:03:39.965071   72712 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:40.009534   72712 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0425 20:03:40.009561   72712 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0425 20:03:40.009629   72712 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.009651   72712 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.009677   72712 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.009662   72712 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.009794   72712 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.009920   72712 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.010033   72712 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.010241   72712 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0425 20:03:40.011305   72712 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.011334   72712 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.011346   72712 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.011686   72712 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.012422   72712 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:40.012429   72712 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.012437   72712 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0425 20:03:40.012546   72712 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.143545   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.155203   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.157842   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.158081   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.161210   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.166515   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.181859   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0425 20:03:40.301699   72712 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0425 20:03:40.301759   72712 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.301805   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.379386   72712 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0425 20:03:40.379445   72712 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.379490   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406119   72712 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0425 20:03:40.406231   72712 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.406174   72712 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0425 20:03:40.406338   72712 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.406365   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.406389   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420450   72712 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0425 20:03:40.420495   72712 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.420548   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.420461   72712 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0425 20:03:40.420629   72712 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.420677   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430055   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0425 20:03:40.430110   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0425 20:03:40.430232   72712 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0425 20:03:40.430263   72712 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0425 20:03:40.430274   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0425 20:03:40.430277   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0425 20:03:40.430303   72712 ssh_runner.go:195] Run: which crictl
	I0425 20:03:40.430326   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0425 20:03:40.430389   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0425 20:03:40.582980   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0425 20:03:40.583094   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0425 20:03:40.587500   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0425 20:03:40.587564   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0425 20:03:40.587579   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0425 20:03:40.587650   72712 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0425 20:03:40.587697   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0425 20:03:40.625942   72712 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0425 20:03:40.941957   72712 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:03:41.096086   72712 cache_images.go:92] duration metric: took 1.086507707s to LoadCachedImages
	W0425 20:03:41.096249   72712 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18757-6355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
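The podman image inspect burst above is how each required image is probed in the guest's image store: a zero exit yields the image ID, a non-zero exit marks the image as "needs transfer", after which the stale tag is removed with crictl rmi and a cached tarball is looked up on the host (and, in this run, not found). A small sketch of the presence probe, assuming podman is available wherever it runs:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagePresent returns the image ID when the image exists in podman's store,
    // or ok=false when `podman image inspect` exits non-zero (image missing).
    func imagePresent(ref string) (id string, ok bool) {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", ref).Output()
        if err != nil {
            return "", false
        }
        return strings.TrimSpace(string(out)), true
    }

    func main() {
        for _, ref := range []string{
            "registry.k8s.io/etcd:3.4.13-0",
            "registry.k8s.io/pause:3.2",
        } {
            if id, ok := imagePresent(ref); ok {
                fmt.Printf("%s present (%s)\n", ref, id)
            } else {
                fmt.Printf("%s needs transfer\n", ref)
            }
        }
    }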
	I0425 20:03:41.096279   72712 kubeadm.go:928] updating node { 192.168.61.136 8443 v1.20.0 crio true true} ...
	I0425 20:03:41.096415   72712 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-210442 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:03:41.096509   72712 ssh_runner.go:195] Run: crio config
	I0425 20:03:41.169311   72712 cni.go:84] Creating CNI manager for ""
	I0425 20:03:41.169341   72712 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:03:41.169357   72712 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:03:41.169397   72712 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.136 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-210442 NodeName:old-k8s-version-210442 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0425 20:03:41.169570   72712 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-210442"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:03:41.169639   72712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0425 20:03:41.182191   72712 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:03:41.182283   72712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:03:41.193546   72712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0425 20:03:41.218220   72712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:03:41.238647   72712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0425 20:03:41.259040   72712 ssh_runner.go:195] Run: grep 192.168.61.136	control-plane.minikube.internal$ /etc/hosts
	I0425 20:03:41.263603   72712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
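host.minikube.internal and control-plane.minikube.internal are both pinned by the same idempotent bash one-liner: filter out any existing line for the name, append the fresh IP mapping, and copy the result back over /etc/hosts. The sketch below reproduces that logic in Go against a scratch file (writing the real /etc/hosts would still need root, as the sudo cp in the log does):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost rewrites hostsPath so that exactly one line maps ip to name,
    // mirroring the grep -v / echo / cp pipeline shown in the log.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // drop blank lines and any stale mapping for this name
            if line == "" || strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Scratch file instead of /etc/hosts for the sketch.
        if err := pinHost("hosts.sample", "192.168.61.136", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("hosts.sample updated")
    }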
	I0425 20:03:41.278007   72712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:41.425587   72712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:03:41.450990   72712 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442 for IP: 192.168.61.136
	I0425 20:03:41.451013   72712 certs.go:194] generating shared ca certs ...
	I0425 20:03:41.451034   72712 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:41.451225   72712 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:03:41.451307   72712 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:03:41.451323   72712 certs.go:256] generating profile certs ...
	I0425 20:03:41.451449   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/client.key
	I0425 20:03:41.451528   72712 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key.1533c9ac
	I0425 20:03:41.451587   72712 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key
	I0425 20:03:41.451789   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:03:41.451860   72712 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:03:41.451880   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:03:41.451915   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:03:41.451945   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:03:41.451968   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:03:41.452023   72712 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:41.452870   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:03:41.510467   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:03:41.555595   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:41.606059   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:03:41.648206   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0425 20:03:41.690090   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:03:41.727674   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:03:41.766537   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/old-k8s-version-210442/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:03:41.799524   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:03:41.828668   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:03:41.860964   72712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:03:41.890272   72712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:03:41.911787   72712 ssh_runner.go:195] Run: openssl version
	I0425 20:03:41.918926   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:03:41.933194   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.938995   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.939060   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:03:41.945934   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
	I0425 20:03:41.959859   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:03:41.974906   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.980931   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.981006   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:03:41.987789   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:03:42.002455   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:03:42.016797   72712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023789   72712 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.023853   72712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:03:42.033189   72712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
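Each CA bundle above is installed the classic OpenSSL way: copy the PEM into /usr/share/ca-certificates, compute its subject hash, and point /etc/ssl/certs/<hash>.0 at it so OpenSSL-linked clients pick it up. A sketch that delegates the hash to `openssl x509 -hash` rather than reimplementing OpenSSL's subject-hash algorithm; paths mirror the log and creating the link still needs root:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // subjectHash returns what `openssl x509 -hash -noout -in certPath` prints,
    // which is the basename OpenSSL expects for trust-store symlinks.
    func subjectHash(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        hash, err := subjectHash(cert)
        if err != nil {
            fmt.Fprintln(os.Stderr, "hash failed:", err)
            os.Exit(1)
        }
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Equivalent of: test -L <link> || ln -fs <cert> <link>  (needs root).
        if _, err := os.Lstat(link); err == nil {
            fmt.Println("symlink already present:", link)
            return
        }
        if err := os.Symlink(cert, link); err != nil {
            fmt.Fprintln(os.Stderr, "symlink failed:", err)
            os.Exit(1)
        }
        fmt.Println("linked", link, "->", cert)
    }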
	I0425 20:03:42.047467   72712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:03:42.053552   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:03:42.063130   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:03:42.070290   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:03:42.079527   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:03:42.087983   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:03:42.096658   72712 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
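The `-checkend 86400` runs above ask whether each control-plane certificate will still be valid 24 hours from now; anything that fails the check gets regenerated before the cluster restart. The same test expressed with Go's crypto/x509 (certificate path copied from the log; it would have to run inside the guest):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d (the -checkend 86400 test uses d = 24h).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("certificate expires within 24h: regenerate")
        } else {
            fmt.Println("certificate still valid for at least 24h")
        }
    }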
	I0425 20:03:42.103477   72712 kubeadm.go:391] StartCluster: {Name:old-k8s-version-210442 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-210442 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.136 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:03:42.103596   72712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:03:42.103649   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.155980   72712 cri.go:89] found id: ""
	I0425 20:03:42.156085   72712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:03:42.172499   72712 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:03:42.172525   72712 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:03:42.172532   72712 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:03:42.172580   72712 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:03:42.187864   72712 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:03:42.188948   72712 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-210442" does not appear in /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:03:42.189659   72712 kubeconfig.go:62] /home/jenkins/minikube-integration/18757-6355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-210442" cluster setting kubeconfig missing "old-k8s-version-210442" context setting]
	I0425 20:03:42.190635   72712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:03:42.192402   72712 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:03:42.207284   72712 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.136
	I0425 20:03:42.207318   72712 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:03:42.207329   72712 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:03:42.207403   72712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:03:42.251184   72712 cri.go:89] found id: ""
	I0425 20:03:42.251257   72712 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:03:42.271727   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:03:42.289161   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:03:42.289184   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:03:42.289237   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:03:42.302492   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:03:42.302588   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:03:42.317790   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:03:42.329940   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:03:42.330002   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:03:42.342772   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:03:39.375028   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:41.871821   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.805640   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:42.806121   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:42.806148   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:42.806072   73884 retry.go:31] will retry after 2.588054599s: waiting for machine to come up
	I0425 20:03:45.395282   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:45.395712   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:45.395759   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:45.395662   73884 retry.go:31] will retry after 3.473643777s: waiting for machine to come up
	I0425 20:03:41.329479   72304 pod_ready.go:92] pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.329511   72304 pod_ready.go:81] duration metric: took 6.008549199s for pod "etcd-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.329523   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335660   72304 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.335688   72304 pod_ready.go:81] duration metric: took 6.15557ms for pod "kube-apiserver-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.335700   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341409   72304 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.341433   72304 pod_ready.go:81] duration metric: took 5.723469ms for pod "kube-controller-manager-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.341446   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347145   72304 pod_ready.go:92] pod "kube-proxy-bqmtp" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.347167   72304 pod_ready.go:81] duration metric: took 5.713095ms for pod "kube-proxy-bqmtp" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.347179   72304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376913   72304 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace has status "Ready":"True"
	I0425 20:03:41.376939   72304 pod_ready.go:81] duration metric: took 29.751827ms for pod "kube-scheduler-default-k8s-diff-port-142196" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:41.376951   72304 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	I0425 20:03:43.383378   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:45.884869   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:42.356480   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:03:42.357280   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:03:42.370403   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:03:42.384245   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:03:42.384332   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:03:42.398271   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:03:42.412361   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:42.575076   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.186458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.480114   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:03:43.594128   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
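On a restart the control plane is rebuilt phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than via a single kubeadm init, each phase invoked with PATH pointed at the cached v1.20.0 binaries. A sketch that drives those phases in order with the commands from the log (it assumes it runs inside the guest with sudo available):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            script := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
            cmd := exec.Command("/bin/bash", "-c", script)
            cmd.Stdout = os.Stdout
            cmd.Stderr = os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
                os.Exit(1)
            }
        }
        fmt.Println("all kubeadm init phases completed")
    }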
	I0425 20:03:43.707129   72712 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:03:43.707221   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.207406   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:44.707733   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.208100   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:45.708041   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.207966   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:46.707255   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:47.207754   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
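After the etcd phase, the helper simply polls `sudo pgrep -xnf kube-apiserver.*minikube.*` every 500ms until the apiserver process shows up. A sketch of that wait loop with an overall timeout (the 4-minute cap is an assumption, not the test's actual value):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for {
            // pgrep exits 0 only when a matching process exists.
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process is up")
                return
            }
            if time.Now().After(deadline) {
                fmt.Fprintln(os.Stderr, "timed out waiting for apiserver process")
                os.Exit(1)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }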
	I0425 20:03:43.873747   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:46.374439   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:48.871928   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:48.872457   71966 main.go:141] libmachine: (embed-certs-512173) DBG | unable to find current IP address of domain embed-certs-512173 in network mk-embed-certs-512173
	I0425 20:03:48.872490   71966 main.go:141] libmachine: (embed-certs-512173) DBG | I0425 20:03:48.872393   73884 retry.go:31] will retry after 4.148424216s: waiting for machine to come up
	I0425 20:03:48.384599   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.883246   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:47.707730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.208213   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.707685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.207879   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:49.707914   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.208278   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:50.707691   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.207600   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:51.707365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:52.207931   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:48.872282   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:50.872356   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.874452   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:53.022813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023343   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has current primary IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.023367   71966 main.go:141] libmachine: (embed-certs-512173) Found IP for machine: 192.168.50.7
	I0425 20:03:53.023381   71966 main.go:141] libmachine: (embed-certs-512173) Reserving static IP address...
	I0425 20:03:53.023750   71966 main.go:141] libmachine: (embed-certs-512173) Reserved static IP address: 192.168.50.7
	I0425 20:03:53.023770   71966 main.go:141] libmachine: (embed-certs-512173) Waiting for SSH to be available...
	I0425 20:03:53.023791   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.023827   71966 main.go:141] libmachine: (embed-certs-512173) DBG | skip adding static IP to network mk-embed-certs-512173 - found existing host DHCP lease matching {name: "embed-certs-512173", mac: "52:54:00:31:60:a2", ip: "192.168.50.7"}
	I0425 20:03:53.023848   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Getting to WaitForSSH function...
	I0425 20:03:53.025753   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.026132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.026244   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH client type: external
	I0425 20:03:53.026268   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Using SSH private key: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa (-rw-------)
	I0425 20:03:53.026301   71966 main.go:141] libmachine: (embed-certs-512173) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0425 20:03:53.026313   71966 main.go:141] libmachine: (embed-certs-512173) DBG | About to run SSH command:
	I0425 20:03:53.026325   71966 main.go:141] libmachine: (embed-certs-512173) DBG | exit 0
	I0425 20:03:53.158487   71966 main.go:141] libmachine: (embed-certs-512173) DBG | SSH cmd err, output: <nil>: 
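"Waiting for SSH to be available" is implemented by repeatedly invoking the external ssh client with non-interactive options and asking it to run `exit 0`; the empty "SSH cmd err, output: <nil>" line above is the first successful probe. A sketch with a trimmed-down option set (key path and address copied from this run; sshReady is an illustrative name, not libmachine's):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // sshReady runs `ssh ... exit 0` with non-interactive options similar to the
    // external SSH client args in the log, and reports whether it succeeded.
    func sshReady(user, addr, keyPath string) bool {
        args := []string{
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, addr),
            "exit", "0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa"
        for i := 0; i < 30; i++ {
            if sshReady("docker", "192.168.50.7", key) {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Fprintln(os.Stderr, "SSH never became available")
        os.Exit(1)
    }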
	I0425 20:03:53.158846   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetConfigRaw
	I0425 20:03:53.159567   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.161881   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162200   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.162257   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.162492   71966 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/config.json ...
	I0425 20:03:53.162658   71966 machine.go:94] provisionDockerMachine start ...
	I0425 20:03:53.162675   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:53.162875   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.164797   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165108   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.165140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.165256   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.165402   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165561   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.165659   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.165815   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.165989   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.166002   71966 main.go:141] libmachine: About to run SSH command:
	hostname
	I0425 20:03:53.283185   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0425 20:03:53.283219   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283455   71966 buildroot.go:166] provisioning hostname "embed-certs-512173"
	I0425 20:03:53.283480   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.283690   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.286427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286813   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.286843   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.286969   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.287164   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287350   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.287490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.287641   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.287881   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.287904   71966 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-512173 && echo "embed-certs-512173" | sudo tee /etc/hostname
	I0425 20:03:53.423037   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-512173
	
	I0425 20:03:53.423067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.425749   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.426140   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.426329   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.426501   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426640   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.426747   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.426866   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.427015   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.427083   71966 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-512173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-512173/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-512173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0425 20:03:53.553687   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0425 20:03:53.553715   71966 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18757-6355/.minikube CaCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18757-6355/.minikube}
	I0425 20:03:53.553749   71966 buildroot.go:174] setting up certificates
	I0425 20:03:53.553758   71966 provision.go:84] configureAuth start
	I0425 20:03:53.553775   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetMachineName
	I0425 20:03:53.554053   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:53.556655   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.556995   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.557034   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.557121   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.559341   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559692   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.559718   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.559897   71966 provision.go:143] copyHostCerts
	I0425 20:03:53.559970   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem, removing ...
	I0425 20:03:53.559984   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem
	I0425 20:03:53.560049   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/ca.pem (1082 bytes)
	I0425 20:03:53.560129   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem, removing ...
	I0425 20:03:53.560136   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem
	I0425 20:03:53.560155   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/cert.pem (1123 bytes)
	I0425 20:03:53.560203   71966 exec_runner.go:144] found /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem, removing ...
	I0425 20:03:53.560214   71966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem
	I0425 20:03:53.560233   71966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18757-6355/.minikube/key.pem (1679 bytes)
	I0425 20:03:53.560278   71966 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-512173 san=[127.0.0.1 192.168.50.7 embed-certs-512173 localhost minikube]
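
The log line above shows the provisioner generating the machine's server certificate, signed by the local CA, with a fixed SAN set (127.0.0.1, 192.168.50.7, embed-certs-512173, localhost, minikube). Below is a minimal Go sketch of that idea, not minikube's own implementation: it assumes the CA key is a PKCS#1 RSA key and uses shortened, illustrative paths.

// servercert.go - hedged sketch of signing a server cert with the machine CA,
// using the SANs listed in the log above. Paths, key size and validity are assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// loadPEM returns the DER bytes of the first PEM block in a file.
func loadPEM(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	// CA material; paths follow the CaCertPath/CaPrivateKeyPath in the log, shortened here.
	caCert, err := x509.ParseCertificate(loadPEM(".minikube/certs/ca.pem"))
	if err != nil {
		panic(err)
	}
	// Assumption: the CA key is a PKCS#1 "RSA PRIVATE KEY" PEM.
	caKey, err := x509.ParsePKCS1PrivateKey(loadPEM(".minikube/certs/ca-key.pem"))
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-512173"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the "generating server cert" log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.7")},
		DNSNames:    []string{"embed-certs-512173", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
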
	I0425 20:03:53.621714   71966 provision.go:177] copyRemoteCerts
	I0425 20:03:53.621777   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0425 20:03:53.621804   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.624556   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.624883   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.624914   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.625128   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.625324   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.625458   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.625602   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:53.715477   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0425 20:03:53.743782   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0425 20:03:53.771468   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0425 20:03:53.798701   71966 provision.go:87] duration metric: took 244.92871ms to configureAuth
	I0425 20:03:53.798726   71966 buildroot.go:189] setting minikube options for container-runtime
	I0425 20:03:53.798922   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:03:53.798991   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:53.801607   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.801946   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:53.801972   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:53.802187   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:53.802373   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802490   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:53.802628   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:53.802833   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:53.802986   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:53.803000   71966 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0425 20:03:54.117164   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0425 20:03:54.117193   71966 machine.go:97] duration metric: took 954.522384ms to provisionDockerMachine
	I0425 20:03:54.117207   71966 start.go:293] postStartSetup for "embed-certs-512173" (driver="kvm2")
	I0425 20:03:54.117219   71966 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0425 20:03:54.117238   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.117558   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0425 20:03:54.117591   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.120060   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120427   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.120454   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.120575   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.120761   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.120891   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.121002   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.209919   71966 ssh_runner.go:195] Run: cat /etc/os-release
	I0425 20:03:54.215633   71966 info.go:137] Remote host: Buildroot 2023.02.9
	I0425 20:03:54.215663   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/addons for local assets ...
	I0425 20:03:54.215747   71966 filesync.go:126] Scanning /home/jenkins/minikube-integration/18757-6355/.minikube/files for local assets ...
	I0425 20:03:54.215860   71966 filesync.go:149] local asset: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem -> 136822.pem in /etc/ssl/certs
	I0425 20:03:54.215996   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0425 20:03:54.227250   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:03:54.257169   71966 start.go:296] duration metric: took 139.949813ms for postStartSetup
	I0425 20:03:54.257212   71966 fix.go:56] duration metric: took 22.069363419s for fixHost
	I0425 20:03:54.257237   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.260255   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260588   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.260613   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.260731   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.260928   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261099   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.261266   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.261447   71966 main.go:141] libmachine: Using SSH client type: native
	I0425 20:03:54.261644   71966 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0425 20:03:54.261655   71966 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0425 20:03:54.376222   71966 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714075434.352338373
	
	I0425 20:03:54.376245   71966 fix.go:216] guest clock: 1714075434.352338373
	I0425 20:03:54.376255   71966 fix.go:229] Guest: 2024-04-25 20:03:54.352338373 +0000 UTC Remote: 2024-04-25 20:03:54.257217658 +0000 UTC m=+368.446046405 (delta=95.120715ms)
	I0425 20:03:54.376287   71966 fix.go:200] guest clock delta is within tolerance: 95.120715ms
	I0425 20:03:54.376295   71966 start.go:83] releasing machines lock for "embed-certs-512173", held for 22.188484297s
	I0425 20:03:54.376317   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.376600   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:54.379217   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379646   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.379678   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.379869   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380436   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380633   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:03:54.380729   71966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0425 20:03:54.380779   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.380857   71966 ssh_runner.go:195] Run: cat /version.json
	I0425 20:03:54.380880   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:03:54.383698   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384052   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384081   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384110   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384283   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384471   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.384610   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.384647   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:54.384683   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:54.384781   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.384821   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:03:54.384982   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:03:54.385131   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:03:54.385330   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:03:54.468506   71966 ssh_runner.go:195] Run: systemctl --version
	I0425 20:03:54.493995   71966 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0425 20:03:54.642719   71966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0425 20:03:54.649565   71966 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0425 20:03:54.649632   71966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0425 20:03:54.667526   71966 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0425 20:03:54.667546   71966 start.go:494] detecting cgroup driver to use...
	I0425 20:03:54.667596   71966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0425 20:03:54.685384   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0425 20:03:54.701852   71966 docker.go:217] disabling cri-docker service (if available) ...
	I0425 20:03:54.701905   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0425 20:03:54.718559   71966 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0425 20:03:54.734874   71966 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0425 20:03:54.858325   71966 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0425 20:03:55.045158   71966 docker.go:233] disabling docker service ...
	I0425 20:03:55.045219   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0425 20:03:55.061668   71966 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0425 20:03:55.076486   71966 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0425 20:03:55.207287   71966 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0425 20:03:55.352537   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0425 20:03:55.369470   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0425 20:03:55.392638   71966 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0425 20:03:55.392718   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.404590   71966 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0425 20:03:55.404655   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.416129   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.427176   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.438632   71966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0425 20:03:55.450725   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.462912   71966 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0425 20:03:55.485340   71966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
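
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager (matching the kubelet configuration generated further down). A rough Go sketch of the same rewrite follows, assuming direct file access instead of the ssh_runner used here.

// crio_conf.go - sketch of the in-place config edits above: point CRI-O at the
// desired pause image and cgroup manager. A regexp rewrite mirrors the sed commands.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// pause_image = "registry.k8s.io/pause:3.9"
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	// cgroup_manager = "cgroupfs" (kept in sync with the kubelet's cgroupDriver)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		panic(err)
	}
}
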
	I0425 20:03:55.498134   71966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0425 20:03:55.508378   71966 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0425 20:03:55.508451   71966 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0425 20:03:55.523073   71966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
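
The two commands above prepare the guest kernel for pod networking: br_netfilter is loaded because the earlier sysctl probe failed, and IPv4 forwarding is switched on. A small Go sketch of the equivalent, assuming it runs as root on the guest:

// ipforward.go - sketch of the kernel prep above: load br_netfilter via modprobe
// and enable IPv4 forwarding by writing to procfs, which is what the echo does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The sysctl check above failed only because br_netfilter was not loaded yet.
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		fmt.Println("modprobe br_netfilter:", err, string(out))
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("IPv4 forwarding enabled")
}
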
	I0425 20:03:55.533901   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:03:55.666845   71966 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0425 20:03:55.828131   71966 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0425 20:03:55.828199   71966 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0425 20:03:55.833768   71966 start.go:562] Will wait 60s for crictl version
	I0425 20:03:55.833824   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:03:55.838000   71966 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0425 20:03:55.881652   71966 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0425 20:03:55.881753   71966 ssh_runner.go:195] Run: crio --version
	I0425 20:03:55.917675   71966 ssh_runner.go:195] Run: crio --version
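
Before the crictl/crio version checks above, the restart logic waits up to 60s for /var/run/crio/crio.sock to appear. A hedged Go sketch of such a wait loop follows; the extra dial is an addition for illustration, since the log itself only stats the path.

// crisock_wait.go - sketch of the "Will wait 60s for socket path" step: poll until
// the CRI-O socket exists and accepts a connection, giving up after a deadline.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			// The socket exists; a quick dial confirms CRI-O is actually accepting.
			if c, err := net.DialTimeout("unix", sock, time.Second); err == nil {
				c.Close()
				fmt.Println("crio socket is ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for", sock)
	os.Exit(1)
}
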
	I0425 20:03:55.953046   71966 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0425 20:03:52.884447   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:54.884538   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:52.707459   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.208241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:53.707431   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.207538   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:54.707289   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.207319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.707625   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.207562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:56.708324   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:57.207348   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:55.373713   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.374476   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:55.954484   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetIP
	I0425 20:03:55.957214   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957611   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:03:55.957638   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:03:55.957832   71966 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0425 20:03:55.962420   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
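
The bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends the gateway mapping. A minimal Go sketch of the same rewrite, assuming direct file access on the guest; the IP and alias are taken from the log.

// ensure_hosts.go - sketch of the /etc/hosts rewrite shown above.
package main

import (
	"os"
	"strings"
)

func main() {
	const alias = "host.minikube.internal"
	const entry = "192.168.50.1\t" + alias // gateway IP from the log

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the alias; keep every other line.
		if strings.HasSuffix(line, "\t"+alias) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
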
	I0425 20:03:55.976512   71966 kubeadm.go:877] updating cluster {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0425 20:03:55.976626   71966 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 20:03:55.976694   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:03:56.019881   71966 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0425 20:03:56.019942   71966 ssh_runner.go:195] Run: which lz4
	I0425 20:03:56.024524   71966 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0425 20:03:56.029297   71966 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0425 20:03:56.029339   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0425 20:03:57.736602   71966 crio.go:462] duration metric: took 1.712117844s to copy over tarball
	I0425 20:03:57.736666   71966 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0425 20:04:00.331696   71966 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.594977915s)
	I0425 20:04:00.331739   71966 crio.go:469] duration metric: took 2.595109768s to extract the tarball
	I0425 20:04:00.331751   71966 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0425 20:04:00.375437   71966 ssh_runner.go:195] Run: sudo crictl images --output json
	I0425 20:04:00.430963   71966 crio.go:514] all images are preloaded for cri-o runtime.
	I0425 20:04:00.430987   71966 cache_images.go:84] Images are preloaded, skipping loading
	I0425 20:04:00.430994   71966 kubeadm.go:928] updating node { 192.168.50.7 8443 v1.30.0 crio true true} ...
	I0425 20:04:00.431081   71966 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-512173 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0425 20:04:00.431154   71966 ssh_runner.go:195] Run: crio config
	I0425 20:04:00.487082   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:00.487106   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:00.487117   71966 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0425 20:04:00.487135   71966 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-512173 NodeName:embed-certs-512173 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0425 20:04:00.487306   71966 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-512173"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0425 20:04:00.487378   71966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0425 20:04:00.498819   71966 binaries.go:44] Found k8s binaries, skipping transfer
	I0425 20:04:00.498881   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0425 20:04:00.509212   71966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0425 20:04:00.527703   71966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0425 20:04:00.546867   71966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0425 20:04:00.566302   71966 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0425 20:04:00.570629   71966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0425 20:04:00.584123   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:00.717589   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:00.743108   71966 certs.go:68] Setting up /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173 for IP: 192.168.50.7
	I0425 20:04:00.743173   71966 certs.go:194] generating shared ca certs ...
	I0425 20:04:00.743201   71966 certs.go:226] acquiring lock for ca certs: {Name:mk3bbe1de7b9dbd80b3410882890f16cc0d1315a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:00.743397   71966 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key
	I0425 20:04:00.743462   71966 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key
	I0425 20:04:00.743480   71966 certs.go:256] generating profile certs ...
	I0425 20:04:00.743644   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/client.key
	I0425 20:04:00.743729   71966 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key.4a0c231f
	I0425 20:04:00.743789   71966 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key
	I0425 20:04:00.743964   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem (1338 bytes)
	W0425 20:04:00.744019   71966 certs.go:480] ignoring /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682_empty.pem, impossibly tiny 0 bytes
	I0425 20:04:00.744033   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca-key.pem (1675 bytes)
	I0425 20:04:00.744064   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/ca.pem (1082 bytes)
	I0425 20:04:00.744093   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/cert.pem (1123 bytes)
	I0425 20:04:00.744117   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/certs/key.pem (1679 bytes)
	I0425 20:04:00.744158   71966 certs.go:484] found cert: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem (1708 bytes)
	I0425 20:04:00.745130   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0425 20:04:00.797856   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0425 20:04:00.848631   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0425 20:03:56.885355   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:58.885857   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:03:57.707868   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.208319   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:58.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.207410   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.707562   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.208006   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:00.708245   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.208178   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:01.707239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:02.207926   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:03:59.873851   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.372919   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:00.877499   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0425 20:04:01.210716   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0425 20:04:01.239562   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0425 20:04:01.267356   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0425 20:04:01.295649   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/embed-certs-512173/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0425 20:04:01.323739   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0425 20:04:01.350440   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/certs/13682.pem --> /usr/share/ca-certificates/13682.pem (1338 bytes)
	I0425 20:04:01.379693   71966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/ssl/certs/136822.pem --> /usr/share/ca-certificates/136822.pem (1708 bytes)
	I0425 20:04:01.409347   71966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0425 20:04:01.429857   71966 ssh_runner.go:195] Run: openssl version
	I0425 20:04:01.437636   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0425 20:04:01.449656   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455022   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 25 18:32 /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.455074   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0425 20:04:01.461442   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0425 20:04:01.473323   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13682.pem && ln -fs /usr/share/ca-certificates/13682.pem /etc/ssl/certs/13682.pem"
	I0425 20:04:01.485988   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491661   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 25 18:45 /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.491719   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13682.pem
	I0425 20:04:01.498567   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13682.pem /etc/ssl/certs/51391683.0"
	I0425 20:04:01.510983   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136822.pem && ln -fs /usr/share/ca-certificates/136822.pem /etc/ssl/certs/136822.pem"
	I0425 20:04:01.523098   71966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528619   71966 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 25 18:45 /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.528667   71966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136822.pem
	I0425 20:04:01.535129   71966 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/136822.pem /etc/ssl/certs/3ec20f2e.0"
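
Each CA above is installed the same way: compute its OpenSSL subject hash and symlink it as <hash>.0 under /etc/ssl/certs so TLS clients can find it by hash. A short Go sketch that shells out to openssl for the hash, as the log does; the certificate path is one of the three used above.

// hashlink.go - sketch of installing a CA into the system trust directory:
// compute the OpenSSL subject hash and create the <hash>.0 symlink (ln -fs).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs: remove any stale link, then point <hash>.0 at the certificate.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}
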
	I0425 20:04:01.546668   71966 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0425 20:04:01.552076   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0425 20:04:01.558928   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0425 20:04:01.566406   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0425 20:04:01.574761   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0425 20:04:01.581250   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0425 20:04:01.588506   71966 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
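
The openssl -checkend 86400 calls above verify that each control-plane certificate is still valid for at least another 24 hours before attempting a cluster restart. A Go sketch of the same check using crypto/x509; the path is one of the certificates listed above.

// checkend.go - rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
// exit non-zero if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: fail if NotAfter falls within 86400 seconds from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
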
	I0425 20:04:01.594844   71966 kubeadm.go:391] StartCluster: {Name:embed-certs-512173 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-512173 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 20:04:01.594917   71966 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0425 20:04:01.594978   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.648050   71966 cri.go:89] found id: ""
	I0425 20:04:01.648155   71966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0425 20:04:01.664291   71966 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0425 20:04:01.664318   71966 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0425 20:04:01.664325   71966 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0425 20:04:01.664387   71966 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0425 20:04:01.678686   71966 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0425 20:04:01.680096   71966 kubeconfig.go:125] found "embed-certs-512173" server: "https://192.168.50.7:8443"
	I0425 20:04:01.682375   71966 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0425 20:04:01.699073   71966 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.7
	I0425 20:04:01.699109   71966 kubeadm.go:1154] stopping kube-system containers ...
	I0425 20:04:01.699122   71966 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0425 20:04:01.699190   71966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0425 20:04:01.744556   71966 cri.go:89] found id: ""
	I0425 20:04:01.744633   71966 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0425 20:04:01.767121   71966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:04:01.778499   71966 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:04:01.778517   71966 kubeadm.go:156] found existing configuration files:
	
	I0425 20:04:01.778575   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:04:01.789171   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:04:01.789242   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:04:01.800000   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:04:01.811015   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:04:01.811078   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:04:01.821752   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.832900   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:04:01.832962   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:04:01.844058   71966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:04:01.854774   71966 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:04:01.854824   71966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:04:01.866086   71966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:04:01.879229   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.180778   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:02.971467   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.202841   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.286951   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:03.412260   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:04:03.412375   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.913176   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.413418   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.443763   71966 api_server.go:72] duration metric: took 1.031501246s to wait for apiserver process to appear ...
	I0425 20:04:04.443796   71966 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:04:04.443816   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:04.444334   71966 api_server.go:269] stopped: https://192.168.50.7:8443/healthz: Get "https://192.168.50.7:8443/healthz": dial tcp 192.168.50.7:8443: connect: connection refused
	I0425 20:04:04.943937   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:01.384590   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:03.885859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:02.707796   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.207913   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:03.708267   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.207491   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.707894   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.207346   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:05.707801   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.208283   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:06.707342   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:07.208190   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:04.381611   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:06.875270   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.463721   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.463767   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.463785   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.479254   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0425 20:04:07.479283   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0425 20:04:07.944812   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:07.949683   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:07.949710   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.444237   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.451663   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.451706   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:08.944231   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:08.949165   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:08.949194   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.444776   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.449703   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.449732   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:09.943865   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:09.948474   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:09.948509   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.444040   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.448740   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0425 20:04:10.448781   71966 api_server.go:103] status: https://192.168.50.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0425 20:04:10.944487   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:04:10.950181   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:04:10.957455   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:04:10.957479   71966 api_server.go:131] duration metric: took 6.513676295s to wait for apiserver health ...
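	(annotation) The 403 → 500 → 200 sequence above is the usual apiserver restart pattern: minikube keeps polling /healthz until the post-start hooks (rbac/bootstrap-roles, scheduling priority classes, apiservice discovery) settle. A minimal sketch of that polling loop, assuming an illustrative endpoint, interval, and timeout; this is not minikube's actual api_server.go code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires. 403 (anonymous user before RBAC bootstrap) and 500
// (post-start hooks still failing) are treated as "not ready yet", as in the
// log above.
func waitForHealthz(url string, timeout time.Duration) error {
	// The bootstrap apiserver serves a cert the probe can't verify yet, so
	// certificate verification is skipped purely for this health check.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.7:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```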
	I0425 20:04:10.957487   71966 cni.go:84] Creating CNI manager for ""
	I0425 20:04:10.957496   71966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:04:10.959196   71966 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:04:06.384595   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:08.883972   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:07.707466   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.207370   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:08.707951   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.207604   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:09.708057   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.207422   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.707391   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.207510   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:11.707828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:12.207519   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:10.960795   71966 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:04:10.977005   71966 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:04:11.001393   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:04:11.021408   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:04:11.021439   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0425 20:04:11.021453   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0425 20:04:11.021466   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0425 20:04:11.021478   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0425 20:04:11.021495   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:04:11.021502   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0425 20:04:11.021513   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:04:11.021521   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:04:11.021533   71966 system_pods.go:74] duration metric: took 20.120592ms to wait for pod list to return data ...
	I0425 20:04:11.021540   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:04:11.025328   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:04:11.025360   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:04:11.025374   71966 node_conditions.go:105] duration metric: took 3.826846ms to run NodePressure ...
	I0425 20:04:11.025394   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0425 20:04:11.304673   71966 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309061   71966 kubeadm.go:733] kubelet initialised
	I0425 20:04:11.309082   71966 kubeadm.go:734] duration metric: took 4.385794ms waiting for restarted kubelet to initialise ...
	I0425 20:04:11.309089   71966 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:11.314583   71966 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.319490   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319515   71966 pod_ready.go:81] duration metric: took 4.900118ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.319524   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.319534   71966 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.324084   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324101   71966 pod_ready.go:81] duration metric: took 4.557199ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.324108   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "etcd-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.324113   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.328151   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328167   71966 pod_ready.go:81] duration metric: took 4.047894ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.328174   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.328184   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.404944   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.404982   71966 pod_ready.go:81] duration metric: took 76.789573ms for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.404997   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.405006   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:11.805191   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805221   71966 pod_ready.go:81] duration metric: took 400.202708ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:11.805238   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-proxy-8247p" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.805248   71966 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.205817   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205847   71966 pod_ready.go:81] duration metric: took 400.591033ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.205858   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.205866   71966 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:12.605705   71966 pod_ready.go:97] node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605736   71966 pod_ready.go:81] duration metric: took 399.849241ms for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:04:12.605745   71966 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-512173" hosting pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:12.605754   71966 pod_ready.go:38] duration metric: took 1.29665644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:12.605776   71966 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:04:12.620368   71966 ops.go:34] apiserver oom_adj: -16
	I0425 20:04:12.620397   71966 kubeadm.go:591] duration metric: took 10.956065292s to restartPrimaryControlPlane
	I0425 20:04:12.620405   71966 kubeadm.go:393] duration metric: took 11.025567867s to StartCluster
	I0425 20:04:12.620419   71966 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.620492   71966 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:04:12.623272   71966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:04:12.623577   71966 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:04:12.625335   71966 out.go:177] * Verifying Kubernetes components...
	I0425 20:04:12.623608   71966 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:04:12.623775   71966 config.go:182] Loaded profile config "embed-certs-512173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:04:12.626619   71966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:04:12.626625   71966 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-512173"
	I0425 20:04:12.626642   71966 addons.go:69] Setting metrics-server=true in profile "embed-certs-512173"
	I0425 20:04:12.626664   71966 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-512173"
	W0425 20:04:12.626674   71966 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:04:12.626681   71966 addons.go:234] Setting addon metrics-server=true in "embed-certs-512173"
	W0425 20:04:12.626690   71966 addons.go:243] addon metrics-server should already be in state true
	I0425 20:04:12.626623   71966 addons.go:69] Setting default-storageclass=true in profile "embed-certs-512173"
	I0425 20:04:12.626709   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626714   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.626718   71966 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-512173"
	I0425 20:04:12.626985   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627013   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627020   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627035   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.627088   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.627130   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.642680   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0425 20:04:12.642798   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0425 20:04:12.642972   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43999
	I0425 20:04:12.643182   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643288   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643418   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.643671   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643696   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643871   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643884   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.643893   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.643915   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.644227   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644235   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644403   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.644431   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.644819   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.644942   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.644980   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.645022   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.647992   71966 addons.go:234] Setting addon default-storageclass=true in "embed-certs-512173"
	W0425 20:04:12.648011   71966 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:04:12.648045   71966 host.go:66] Checking if "embed-certs-512173" exists ...
	I0425 20:04:12.648393   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.648429   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.660989   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41421
	I0425 20:04:12.661534   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.662561   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.662592   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.662614   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40637
	I0425 20:04:12.662804   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0425 20:04:12.662947   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663016   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663116   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.663173   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.663515   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663547   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663585   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.663604   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.663882   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.663920   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.664096   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.664487   71966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:04:12.664506   71966 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:04:12.665031   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.667087   71966 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:04:12.668326   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:04:12.668343   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:04:12.668361   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.666460   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.669907   71966 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:04:09.373628   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:11.376301   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.671391   71966 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.671411   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:04:12.671427   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.671566   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672113   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.672132   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.672233   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.672353   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.672439   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.672525   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.674511   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.674926   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.674951   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.675178   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.675357   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.675505   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.675662   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.683720   71966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39673
	I0425 20:04:12.684195   71966 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:04:12.684736   71966 main.go:141] libmachine: Using API Version  1
	I0425 20:04:12.684755   71966 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:04:12.685100   71966 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:04:12.685282   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetState
	I0425 20:04:12.687009   71966 main.go:141] libmachine: (embed-certs-512173) Calling .DriverName
	I0425 20:04:12.687257   71966 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.687277   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:04:12.687325   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHHostname
	I0425 20:04:12.689958   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690356   71966 main.go:141] libmachine: (embed-certs-512173) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:60:a2", ip: ""} in network mk-embed-certs-512173: {Iface:virbr1 ExpiryTime:2024-04-25 21:03:46 +0000 UTC Type:0 Mac:52:54:00:31:60:a2 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:embed-certs-512173 Clientid:01:52:54:00:31:60:a2}
	I0425 20:04:12.690374   71966 main.go:141] libmachine: (embed-certs-512173) DBG | domain embed-certs-512173 has defined IP address 192.168.50.7 and MAC address 52:54:00:31:60:a2 in network mk-embed-certs-512173
	I0425 20:04:12.690446   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHPort
	I0425 20:04:12.690655   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHKeyPath
	I0425 20:04:12.690841   71966 main.go:141] libmachine: (embed-certs-512173) Calling .GetSSHUsername
	I0425 20:04:12.690989   71966 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/embed-certs-512173/id_rsa Username:docker}
	I0425 20:04:12.846840   71966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:04:12.865045   71966 node_ready.go:35] waiting up to 6m0s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:12.938848   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:04:12.938875   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:04:12.941038   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:04:12.959316   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:04:12.977813   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:04:12.977841   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:04:13.050586   71966 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:13.050610   71966 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:04:13.111207   71966 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:04:14.253195   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.31212607s)
	I0425 20:04:14.253252   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253247   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.293897647s)
	I0425 20:04:14.253268   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253303   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253371   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253625   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253641   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253650   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253656   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253677   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.253690   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253699   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.253711   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.253876   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254099   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.253911   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253949   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.253977   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.254193   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.260565   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.260584   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.260830   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.260850   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.342979   71966 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.231720554s)
	I0425 20:04:14.343042   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343067   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343349   71966 main.go:141] libmachine: (embed-certs-512173) DBG | Closing plugin on server side
	I0425 20:04:14.343358   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343374   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343390   71966 main.go:141] libmachine: Making call to close driver server
	I0425 20:04:14.343398   71966 main.go:141] libmachine: (embed-certs-512173) Calling .Close
	I0425 20:04:14.343602   71966 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:04:14.343623   71966 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:04:14.343633   71966 addons.go:470] Verifying addon metrics-server=true in "embed-certs-512173"
	I0425 20:04:14.346631   71966 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0425 20:04:14.347936   71966 addons.go:505] duration metric: took 1.724328435s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0425 20:04:14.869074   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:11.383960   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:13.384840   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.883656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:12.707816   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.207561   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.708264   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.207822   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:14.707509   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.207507   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:15.707899   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.208254   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:16.708246   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:17.207508   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:13.873212   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:15.873263   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:18.373183   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:16.870001   71966 node_ready.go:53] node "embed-certs-512173" has status "Ready":"False"
	I0425 20:04:18.368960   71966 node_ready.go:49] node "embed-certs-512173" has status "Ready":"True"
	I0425 20:04:18.368991   71966 node_ready.go:38] duration metric: took 5.503919958s for node "embed-certs-512173" to be "Ready" ...
	I0425 20:04:18.369003   71966 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:04:18.375440   71966 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380902   71966 pod_ready.go:92] pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.380920   71966 pod_ready.go:81] duration metric: took 5.456921ms for pod "coredns-7db6d8ff4d-xsptj" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.380928   71966 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386330   71966 pod_ready.go:92] pod "etcd-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.386386   71966 pod_ready.go:81] duration metric: took 5.451019ms for pod "etcd-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.386402   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391115   71966 pod_ready.go:92] pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:18.391138   71966 pod_ready.go:81] duration metric: took 4.727835ms for pod "kube-apiserver-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:18.391149   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:20.398316   71966 pod_ready.go:102] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.885191   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:20.384439   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:17.707948   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.207953   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:18.707659   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.207609   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:19.707567   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.207989   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.707938   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.208305   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:21.707827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:22.207940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:20.374376   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.873180   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.899221   71966 pod_ready.go:92] pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.899240   71966 pod_ready.go:81] duration metric: took 4.508083804s for pod "kube-controller-manager-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.899250   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904904   71966 pod_ready.go:92] pod "kube-proxy-8247p" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.904922   71966 pod_ready.go:81] duration metric: took 5.665557ms for pod "kube-proxy-8247p" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.904929   71966 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910035   71966 pod_ready.go:92] pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace has status "Ready":"True"
	I0425 20:04:22.910051   71966 pod_ready.go:81] duration metric: took 5.116298ms for pod "kube-scheduler-embed-certs-512173" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:22.910059   71966 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	I0425 20:04:24.919233   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.884480   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:25.384287   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:22.707381   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.207532   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:23.707461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.208239   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:24.707742   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.208365   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.707323   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.207485   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:26.707727   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:27.208332   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:25.373538   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.872428   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.420297   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.918808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.385722   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:29.883321   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:27.707275   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.207776   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:28.708096   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.207685   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.708249   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.207647   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:30.707943   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.207471   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:31.707902   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:32.207582   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:29.872576   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.372818   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.416593   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:34.416976   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:31.884120   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:33.885341   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:35.886190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:32.708066   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.208090   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:33.707474   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.207664   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.708110   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.208160   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:35.707940   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.207505   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:36.708334   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:37.207939   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:34.375813   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.873166   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:36.417945   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.916796   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:38.384530   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.384673   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:37.707256   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.207621   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.708237   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.208327   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:39.707542   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.207371   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:40.708300   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.207577   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:41.708097   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:42.207684   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:38.876272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:41.372217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:40.918223   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:43.420086   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.389390   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:44.885243   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:42.708257   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.207407   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:43.707548   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:43.707618   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:43.753656   72712 cri.go:89] found id: ""
	I0425 20:04:43.753686   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.753698   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:43.753706   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:43.753770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:43.797957   72712 cri.go:89] found id: ""
	I0425 20:04:43.797982   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.797991   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:43.797996   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:43.798051   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:43.836700   72712 cri.go:89] found id: ""
	I0425 20:04:43.836729   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.836737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:43.836742   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:43.836795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:43.883452   72712 cri.go:89] found id: ""
	I0425 20:04:43.883478   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.883486   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:43.883492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:43.883544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:43.929975   72712 cri.go:89] found id: ""
	I0425 20:04:43.930004   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.930014   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:43.930022   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:43.930089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:43.967648   72712 cri.go:89] found id: ""
	I0425 20:04:43.967681   72712 logs.go:276] 0 containers: []
	W0425 20:04:43.967693   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:43.967701   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:43.967758   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:44.011024   72712 cri.go:89] found id: ""
	I0425 20:04:44.011048   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.011072   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:44.011078   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:44.011129   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:44.050233   72712 cri.go:89] found id: ""
	I0425 20:04:44.050263   72712 logs.go:276] 0 containers: []
	W0425 20:04:44.050274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:44.050286   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:44.050302   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:44.196275   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:44.196307   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:44.196323   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:44.260707   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:44.260748   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:44.306051   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:44.306090   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:44.357643   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:44.357682   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:46.875982   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:46.890987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:46.891062   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:46.935855   72712 cri.go:89] found id: ""
	I0425 20:04:46.935878   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.935885   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:46.935891   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:46.935948   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:46.978634   72712 cri.go:89] found id: ""
	I0425 20:04:46.978662   72712 logs.go:276] 0 containers: []
	W0425 20:04:46.978674   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:46.978681   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:46.978749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:47.019845   72712 cri.go:89] found id: ""
	I0425 20:04:47.019864   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.019872   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:47.019877   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:47.019933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:47.065002   72712 cri.go:89] found id: ""
	I0425 20:04:47.065040   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.065064   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:47.065072   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:47.065139   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:47.106370   72712 cri.go:89] found id: ""
	I0425 20:04:47.106404   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.106416   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:47.106423   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:47.106483   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:47.143851   72712 cri.go:89] found id: ""
	I0425 20:04:47.143874   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.143883   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:47.143888   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:47.143932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:47.186130   72712 cri.go:89] found id: ""
	I0425 20:04:47.186160   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.186168   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:47.186174   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:47.186238   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:47.228959   72712 cri.go:89] found id: ""
	I0425 20:04:47.228984   72712 logs.go:276] 0 containers: []
	W0425 20:04:47.228992   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:47.229000   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:47.229010   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:47.299852   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:47.299893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:47.346078   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:47.346111   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:43.872670   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:46.373259   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:45.917948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.919494   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:50.420952   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.388353   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:49.884300   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:47.405897   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:47.405932   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:47.424426   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:47.424455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:47.506603   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.007697   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:50.023258   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:50.023333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:50.066794   72712 cri.go:89] found id: ""
	I0425 20:04:50.066827   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.066836   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:50.066842   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:50.066913   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:50.109167   72712 cri.go:89] found id: ""
	I0425 20:04:50.109200   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.109212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:50.109219   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:50.109306   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:50.151854   72712 cri.go:89] found id: ""
	I0425 20:04:50.151878   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.151886   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:50.151892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:50.151940   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:50.190600   72712 cri.go:89] found id: ""
	I0425 20:04:50.190632   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.190644   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:50.190672   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:50.190742   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:50.232851   72712 cri.go:89] found id: ""
	I0425 20:04:50.232874   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.232883   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:50.232889   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:50.232935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:50.274941   72712 cri.go:89] found id: ""
	I0425 20:04:50.274971   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.274983   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:50.274990   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:50.275069   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:50.320954   72712 cri.go:89] found id: ""
	I0425 20:04:50.320981   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.320992   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:50.320999   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:50.321068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:50.361799   72712 cri.go:89] found id: ""
	I0425 20:04:50.361829   72712 logs.go:276] 0 containers: []
	W0425 20:04:50.361839   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:50.361847   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:50.361858   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:50.457792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:50.457819   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:50.457834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:50.539653   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:50.539702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:50.598740   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:50.598774   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:50.650501   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:50.650533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:48.872490   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.374484   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:52.919420   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:55.420126   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:51.887536   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:54.389174   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:53.167827   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:53.183324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:53.183403   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:53.227598   72712 cri.go:89] found id: ""
	I0425 20:04:53.227641   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.227650   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:53.227655   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:53.227700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:53.271170   72712 cri.go:89] found id: ""
	I0425 20:04:53.271200   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.271212   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:53.271220   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:53.271304   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:53.318185   72712 cri.go:89] found id: ""
	I0425 20:04:53.318233   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.318246   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:53.318255   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:53.318324   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:53.372199   72712 cri.go:89] found id: ""
	I0425 20:04:53.372228   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.372238   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:53.372244   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:53.372367   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:53.414048   72712 cri.go:89] found id: ""
	I0425 20:04:53.414080   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.414091   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:53.414099   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:53.414170   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:53.455746   72712 cri.go:89] found id: ""
	I0425 20:04:53.455806   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.455819   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:53.455827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:53.455901   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:53.497969   72712 cri.go:89] found id: ""
	I0425 20:04:53.497996   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.498004   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:53.498011   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:53.498057   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:53.543642   72712 cri.go:89] found id: ""
	I0425 20:04:53.543668   72712 logs.go:276] 0 containers: []
	W0425 20:04:53.543675   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:53.543684   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:53.543693   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:53.596106   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:53.596144   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:53.612755   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:53.612787   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:53.693068   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:53.693089   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:53.693102   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:53.771499   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:53.771535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:56.322663   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:56.336866   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:56.336945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:56.375515   72712 cri.go:89] found id: ""
	I0425 20:04:56.375556   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.375567   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:56.375574   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:56.375641   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:56.423230   72712 cri.go:89] found id: ""
	I0425 20:04:56.423261   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.423273   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:56.423281   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:56.423366   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:56.467786   72712 cri.go:89] found id: ""
	I0425 20:04:56.467814   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.467835   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:56.467842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:56.467895   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:56.517671   72712 cri.go:89] found id: ""
	I0425 20:04:56.517696   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.517708   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:56.517715   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:56.517770   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:56.558622   72712 cri.go:89] found id: ""
	I0425 20:04:56.558651   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.558662   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:56.558669   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:56.558746   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:56.601350   72712 cri.go:89] found id: ""
	I0425 20:04:56.601374   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.601382   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:56.601387   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:56.601444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:56.645892   72712 cri.go:89] found id: ""
	I0425 20:04:56.645923   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.645934   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:56.645940   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:56.646001   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:56.691619   72712 cri.go:89] found id: ""
	I0425 20:04:56.691645   72712 logs.go:276] 0 containers: []
	W0425 20:04:56.691656   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:56.691665   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:56.691679   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:56.744854   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:56.744891   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:56.762523   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:56.762556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:04:56.843396   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:04:56.843422   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:04:56.843437   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:04:56.933785   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:04:56.933825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:04:53.872514   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.372956   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:58.373649   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:57.917208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.920979   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:56.884907   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.385506   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:04:59.481512   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:04:59.497510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:04:59.497588   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:04:59.547382   72712 cri.go:89] found id: ""
	I0425 20:04:59.547412   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.547423   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:04:59.547432   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:04:59.547486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:04:59.597671   72712 cri.go:89] found id: ""
	I0425 20:04:59.597699   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.597711   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:04:59.597717   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:04:59.597762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:04:59.641455   72712 cri.go:89] found id: ""
	I0425 20:04:59.641486   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.641497   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:04:59.641503   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:04:59.641613   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:04:59.685052   72712 cri.go:89] found id: ""
	I0425 20:04:59.685092   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.685104   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:04:59.685112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:04:59.685173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:04:59.735912   72712 cri.go:89] found id: ""
	I0425 20:04:59.735943   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.735951   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:04:59.735957   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:04:59.736025   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:04:59.799294   72712 cri.go:89] found id: ""
	I0425 20:04:59.799322   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.799332   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:04:59.799338   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:04:59.799395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:04:59.871270   72712 cri.go:89] found id: ""
	I0425 20:04:59.871297   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.871308   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:04:59.871315   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:04:59.871377   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:04:59.919001   72712 cri.go:89] found id: ""
	I0425 20:04:59.919091   72712 logs.go:276] 0 containers: []
	W0425 20:04:59.919110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:04:59.919120   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:04:59.919135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:04:59.973458   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:04:59.973498   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:04:59.989729   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:04:59.989757   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:00.072887   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:00.072911   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:00.072926   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:00.153886   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:00.153921   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:00.873812   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.372969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.417960   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:04.420353   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:01.885238   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:03.887277   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:02.707465   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:02.722771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:02.722831   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:02.770101   72712 cri.go:89] found id: ""
	I0425 20:05:02.770134   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.770147   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:02.770154   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:02.770224   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:02.817819   72712 cri.go:89] found id: ""
	I0425 20:05:02.817854   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.817865   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:02.817898   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:02.817963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:02.857036   72712 cri.go:89] found id: ""
	I0425 20:05:02.857066   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.857077   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:02.857085   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:02.857144   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:02.900112   72712 cri.go:89] found id: ""
	I0425 20:05:02.900145   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.900157   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:02.900164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:02.900221   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:02.941079   72712 cri.go:89] found id: ""
	I0425 20:05:02.941109   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.941116   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:02.941121   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:02.941198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:02.983458   72712 cri.go:89] found id: ""
	I0425 20:05:02.983490   72712 logs.go:276] 0 containers: []
	W0425 20:05:02.983502   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:02.983510   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:02.983574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:03.025424   72712 cri.go:89] found id: ""
	I0425 20:05:03.025451   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.025462   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:03.025469   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:03.025556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:03.065285   72712 cri.go:89] found id: ""
	I0425 20:05:03.065316   72712 logs.go:276] 0 containers: []
	W0425 20:05:03.065328   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:03.065340   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:03.065351   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:03.121235   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:03.121267   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:03.138036   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:03.138073   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:03.213604   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:03.213638   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:03.213655   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:03.296696   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:03.296741   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.842642   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:05.859125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:05.859199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:05.906505   72712 cri.go:89] found id: ""
	I0425 20:05:05.906529   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.906537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:05.906542   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:05.906595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:05.950793   72712 cri.go:89] found id: ""
	I0425 20:05:05.950819   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.950831   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:05.950838   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:05.950902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:05.991612   72712 cri.go:89] found id: ""
	I0425 20:05:05.991644   72712 logs.go:276] 0 containers: []
	W0425 20:05:05.991654   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:05.991661   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:05.991755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:06.032273   72712 cri.go:89] found id: ""
	I0425 20:05:06.032314   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.032326   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:06.032334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:06.032392   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:06.071802   72712 cri.go:89] found id: ""
	I0425 20:05:06.071833   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.071844   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:06.071852   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:06.071908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:06.116676   72712 cri.go:89] found id: ""
	I0425 20:05:06.116702   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.116710   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:06.116716   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:06.116759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:06.154720   72712 cri.go:89] found id: ""
	I0425 20:05:06.154753   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.154765   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:06.154771   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:06.154842   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:06.196421   72712 cri.go:89] found id: ""
	I0425 20:05:06.196457   72712 logs.go:276] 0 containers: []
	W0425 20:05:06.196469   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:06.196480   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:06.196493   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:06.251061   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:06.251122   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:06.267764   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:06.267799   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:06.345302   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:06.345334   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:06.345349   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:06.427836   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:06.427868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:05.873928   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.372014   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.422386   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.916659   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:06.384700   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.883611   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:10.885814   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:08.989442   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:09.004493   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:09.004551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:09.056062   72712 cri.go:89] found id: ""
	I0425 20:05:09.056086   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.056096   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:09.056101   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:09.056148   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:09.096791   72712 cri.go:89] found id: ""
	I0425 20:05:09.096817   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.096827   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:09.096834   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:09.096889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:09.134649   72712 cri.go:89] found id: ""
	I0425 20:05:09.134680   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.134691   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:09.134698   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:09.134757   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:09.175980   72712 cri.go:89] found id: ""
	I0425 20:05:09.176010   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.176021   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:09.176028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:09.176084   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:09.216263   72712 cri.go:89] found id: ""
	I0425 20:05:09.216299   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.216313   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:09.216325   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:09.216395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:09.260498   72712 cri.go:89] found id: ""
	I0425 20:05:09.260528   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.260538   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:09.260544   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:09.260603   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:09.303154   72712 cri.go:89] found id: ""
	I0425 20:05:09.303178   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.303201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:09.303209   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:09.303269   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:09.350798   72712 cri.go:89] found id: ""
	I0425 20:05:09.350829   72712 logs.go:276] 0 containers: []
	W0425 20:05:09.350840   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:09.350852   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:09.350868   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:09.405295   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:09.405332   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:09.422788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:09.422820   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:09.501819   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:09.501841   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:09.501855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:09.586938   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:09.586981   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:12.132731   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:12.148860   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:12.148935   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:12.194021   72712 cri.go:89] found id: ""
	I0425 20:05:12.194051   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.194064   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:12.194072   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:12.194152   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:12.234680   72712 cri.go:89] found id: ""
	I0425 20:05:12.234710   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.234721   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:12.234728   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:12.234792   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:12.277751   72712 cri.go:89] found id: ""
	I0425 20:05:12.277783   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.277794   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:12.277802   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:12.277864   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:12.324068   72712 cri.go:89] found id: ""
	I0425 20:05:12.324100   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.324117   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:12.324125   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:12.324187   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:10.374594   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.873217   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:11.424208   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.425980   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:13.387259   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.884337   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:12.366797   72712 cri.go:89] found id: ""
	I0425 20:05:12.366825   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.366837   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:12.366844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:12.366903   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:12.413092   72712 cri.go:89] found id: ""
	I0425 20:05:12.413120   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.413132   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:12.413139   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:12.413198   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:12.461229   72712 cri.go:89] found id: ""
	I0425 20:05:12.461253   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.461262   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:12.461268   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:12.461333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:12.504646   72712 cri.go:89] found id: ""
	I0425 20:05:12.504669   72712 logs.go:276] 0 containers: []
	W0425 20:05:12.504677   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:12.504685   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:12.504698   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:12.561630   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:12.561673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:12.578043   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:12.578069   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:12.655176   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:12.655195   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:12.655209   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:12.736323   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:12.736357   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.287503   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:15.302830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:15.302893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:15.339479   72712 cri.go:89] found id: ""
	I0425 20:05:15.339509   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.339519   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:15.339527   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:15.339589   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:15.381431   72712 cri.go:89] found id: ""
	I0425 20:05:15.381458   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.381467   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:15.381475   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:15.381537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:15.423729   72712 cri.go:89] found id: ""
	I0425 20:05:15.423755   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.423767   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:15.423774   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:15.423833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:15.464367   72712 cri.go:89] found id: ""
	I0425 20:05:15.464401   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.464413   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:15.464421   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:15.464489   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:15.508306   72712 cri.go:89] found id: ""
	I0425 20:05:15.508336   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.508348   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:15.508356   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:15.508419   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:15.548572   72712 cri.go:89] found id: ""
	I0425 20:05:15.548600   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.548610   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:15.548616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:15.548678   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:15.592885   72712 cri.go:89] found id: ""
	I0425 20:05:15.592914   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.592926   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:15.592933   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:15.592992   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:15.632817   72712 cri.go:89] found id: ""
	I0425 20:05:15.632855   72712 logs.go:276] 0 containers: []
	W0425 20:05:15.632868   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:15.632880   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:15.632900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:15.648443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:15.648470   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:15.726167   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:15.726191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:15.726229   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:15.803028   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:15.803066   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:15.850519   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:15.850552   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:14.873291   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:17.372118   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:15.917932   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.420096   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.384555   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.885930   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:18.404671   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:18.422600   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:18.422663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:18.476977   72712 cri.go:89] found id: ""
	I0425 20:05:18.477001   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.477009   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:18.477021   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:18.477093   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:18.525595   72712 cri.go:89] found id: ""
	I0425 20:05:18.525631   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.525641   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:18.525648   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:18.525714   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:18.565485   72712 cri.go:89] found id: ""
	I0425 20:05:18.565513   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.565523   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:18.565531   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:18.565600   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:18.612059   72712 cri.go:89] found id: ""
	I0425 20:05:18.612096   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.612106   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:18.612112   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:18.612173   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:18.659407   72712 cri.go:89] found id: ""
	I0425 20:05:18.659438   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.659449   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:18.659456   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:18.659507   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:18.701065   72712 cri.go:89] found id: ""
	I0425 20:05:18.701092   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.701101   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:18.701106   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:18.701201   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:18.738234   72712 cri.go:89] found id: ""
	I0425 20:05:18.738264   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.738276   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:18.738284   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:18.738343   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:18.780460   72712 cri.go:89] found id: ""
	I0425 20:05:18.780489   72712 logs.go:276] 0 containers: []
	W0425 20:05:18.780498   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:18.780514   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:18.780526   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:18.834345   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:18.834378   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:18.850006   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:18.850033   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:18.932146   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:18.932171   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:18.932185   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:19.015036   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:19.015068   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:21.568250   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:21.582519   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:21.582595   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:21.622886   72712 cri.go:89] found id: ""
	I0425 20:05:21.622913   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.622920   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:21.622925   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:21.622974   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:21.664832   72712 cri.go:89] found id: ""
	I0425 20:05:21.664860   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.664874   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:21.664882   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:21.664950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:21.703801   72712 cri.go:89] found id: ""
	I0425 20:05:21.703829   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.703843   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:21.703850   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:21.703911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:21.741502   72712 cri.go:89] found id: ""
	I0425 20:05:21.741540   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.741549   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:21.741555   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:21.741612   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:21.783715   72712 cri.go:89] found id: ""
	I0425 20:05:21.783745   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.783754   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:21.783759   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:21.783803   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:21.822806   72712 cri.go:89] found id: ""
	I0425 20:05:21.822842   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.822851   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:21.822856   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:21.822915   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:21.864996   72712 cri.go:89] found id: ""
	I0425 20:05:21.865020   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.865030   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:21.865037   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:21.865092   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:21.907533   72712 cri.go:89] found id: ""
	I0425 20:05:21.907563   72712 logs.go:276] 0 containers: []
	W0425 20:05:21.907575   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:21.907585   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:21.907601   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:21.964226   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:21.964260   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:21.980096   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:21.980123   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:22.059516   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:22.059539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:22.059566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:22.136752   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:22.136784   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:19.373290   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:21.873377   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:20.916720   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:22.917156   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.918191   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:23.384566   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:25.885793   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:24.682139   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:24.697495   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:24.697564   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:24.739725   72712 cri.go:89] found id: ""
	I0425 20:05:24.739750   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.739760   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:24.739766   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:24.739824   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:24.777455   72712 cri.go:89] found id: ""
	I0425 20:05:24.777485   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.777497   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:24.777504   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:24.777566   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:24.821729   72712 cri.go:89] found id: ""
	I0425 20:05:24.821761   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.821774   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:24.821782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:24.821845   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:24.861745   72712 cri.go:89] found id: ""
	I0425 20:05:24.861773   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.861784   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:24.861791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:24.861851   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:24.903441   72712 cri.go:89] found id: ""
	I0425 20:05:24.903470   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.903479   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:24.903486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:24.903544   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:24.943589   72712 cri.go:89] found id: ""
	I0425 20:05:24.943618   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.943629   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:24.943637   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:24.943717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:24.983629   72712 cri.go:89] found id: ""
	I0425 20:05:24.983661   72712 logs.go:276] 0 containers: []
	W0425 20:05:24.983672   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:24.983680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:24.983739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:25.022413   72712 cri.go:89] found id: ""
	I0425 20:05:25.022441   72712 logs.go:276] 0 containers: []
	W0425 20:05:25.022451   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:25.022462   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:25.022477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:25.077402   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:25.077438   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:25.094488   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:25.094517   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:25.171485   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:25.171515   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:25.171535   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:25.251131   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:25.251166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:24.373762   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:26.873969   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.420395   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:29.420994   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:28.384247   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:30.883795   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:27.797359   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:27.813601   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:27.813659   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:27.854017   72712 cri.go:89] found id: ""
	I0425 20:05:27.854051   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.854061   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:27.854066   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:27.854117   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:27.900425   72712 cri.go:89] found id: ""
	I0425 20:05:27.900451   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.900461   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:27.900468   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:27.900531   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:27.940064   72712 cri.go:89] found id: ""
	I0425 20:05:27.940096   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.940107   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:27.940114   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:27.940174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:27.979363   72712 cri.go:89] found id: ""
	I0425 20:05:27.979385   72712 logs.go:276] 0 containers: []
	W0425 20:05:27.979393   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:27.979399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:27.979442   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:28.019702   72712 cri.go:89] found id: ""
	I0425 20:05:28.019723   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.019731   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:28.019736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:28.019798   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:28.058711   72712 cri.go:89] found id: ""
	I0425 20:05:28.058740   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.058748   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:28.058755   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:28.058810   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:28.104465   72712 cri.go:89] found id: ""
	I0425 20:05:28.104495   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.104507   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:28.104515   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:28.104577   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:28.142399   72712 cri.go:89] found id: ""
	I0425 20:05:28.142431   72712 logs.go:276] 0 containers: []
	W0425 20:05:28.142440   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:28.142449   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:28.142460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:28.222763   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:28.222786   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:28.222801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:28.299797   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:28.299838   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:28.366569   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:28.366594   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:28.424581   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:28.424628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:30.942526   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:30.957400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:30.957482   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:30.996931   72712 cri.go:89] found id: ""
	I0425 20:05:30.996958   72712 logs.go:276] 0 containers: []
	W0425 20:05:30.996967   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:30.996974   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:30.997029   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:31.035673   72712 cri.go:89] found id: ""
	I0425 20:05:31.035700   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.035712   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:31.035719   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:31.035782   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:31.075783   72712 cri.go:89] found id: ""
	I0425 20:05:31.075809   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.075820   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:31.075826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:31.075886   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:31.114229   72712 cri.go:89] found id: ""
	I0425 20:05:31.114257   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.114267   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:31.114274   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:31.114333   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:31.155385   72712 cri.go:89] found id: ""
	I0425 20:05:31.155409   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.155419   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:31.155427   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:31.155486   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:31.193772   72712 cri.go:89] found id: ""
	I0425 20:05:31.193804   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.193815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:31.193823   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:31.193878   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:31.233886   72712 cri.go:89] found id: ""
	I0425 20:05:31.233909   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.233917   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:31.233923   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:31.233967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:31.273427   72712 cri.go:89] found id: ""
	I0425 20:05:31.273455   72712 logs.go:276] 0 containers: []
	W0425 20:05:31.273465   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:31.273476   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:31.273491   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:31.354429   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:31.354462   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:31.406018   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:31.406047   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:31.460972   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:31.461007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:31.477485   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:31.477513   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:31.551616   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:29.371357   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.373007   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:31.421948   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.424866   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:33.384577   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.884780   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:34.052808   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:34.068068   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:34.068158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:34.120984   72712 cri.go:89] found id: ""
	I0425 20:05:34.121016   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.121024   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:34.121032   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:34.121082   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:34.160646   72712 cri.go:89] found id: ""
	I0425 20:05:34.160676   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.160687   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:34.160694   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:34.160752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:34.202641   72712 cri.go:89] found id: ""
	I0425 20:05:34.202665   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.202671   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:34.202677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:34.202733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:34.244352   72712 cri.go:89] found id: ""
	I0425 20:05:34.244379   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.244391   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:34.244400   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:34.244460   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:34.285858   72712 cri.go:89] found id: ""
	I0425 20:05:34.285885   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.285896   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:34.285904   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:34.285956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:34.323634   72712 cri.go:89] found id: ""
	I0425 20:05:34.323662   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.323673   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:34.323681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:34.323739   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:34.365230   72712 cri.go:89] found id: ""
	I0425 20:05:34.365256   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.365272   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:34.365280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:34.365339   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:34.409329   72712 cri.go:89] found id: ""
	I0425 20:05:34.409354   72712 logs.go:276] 0 containers: []
	W0425 20:05:34.409365   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:34.409376   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:34.409390   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:34.464575   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:34.464606   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:34.480244   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:34.480270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:34.560204   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:34.560224   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:34.560236   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:34.640152   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:34.640187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:37.189992   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:37.204683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:37.204786   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:37.245857   72712 cri.go:89] found id: ""
	I0425 20:05:37.245891   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.245903   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:37.245910   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:37.245969   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:37.284668   72712 cri.go:89] found id: ""
	I0425 20:05:37.284696   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.284704   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:37.284710   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:37.284762   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:37.324349   72712 cri.go:89] found id: ""
	I0425 20:05:37.324379   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.324391   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:37.324399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:37.324461   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:33.872836   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.873214   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.373278   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:35.917308   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.419746   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:38.383933   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.385166   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:37.361764   72712 cri.go:89] found id: ""
	I0425 20:05:37.361787   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.361800   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:37.361811   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:37.361857   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:37.404331   72712 cri.go:89] found id: ""
	I0425 20:05:37.404353   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.404360   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:37.404366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:37.404430   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:37.445284   72712 cri.go:89] found id: ""
	I0425 20:05:37.445316   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.445327   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:37.445334   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:37.445395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:37.483806   72712 cri.go:89] found id: ""
	I0425 20:05:37.483828   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.483837   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:37.483843   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:37.483888   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:37.524649   72712 cri.go:89] found id: ""
	I0425 20:05:37.524673   72712 logs.go:276] 0 containers: []
	W0425 20:05:37.524680   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:37.524689   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:37.524701   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:37.581521   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:37.581553   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:37.598459   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:37.598487   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:37.671236   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:37.671256   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:37.671272   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:37.750517   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:37.750556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.293743   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:40.310344   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:40.310426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:40.356157   72712 cri.go:89] found id: ""
	I0425 20:05:40.356198   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.356208   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:40.356215   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:40.356277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:40.397857   72712 cri.go:89] found id: ""
	I0425 20:05:40.397886   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.397895   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:40.397902   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:40.397964   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:40.445034   72712 cri.go:89] found id: ""
	I0425 20:05:40.445057   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.445065   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:40.445071   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:40.445126   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:40.493744   72712 cri.go:89] found id: ""
	I0425 20:05:40.493773   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.493783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:40.493797   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:40.493856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:40.550546   72712 cri.go:89] found id: ""
	I0425 20:05:40.550572   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.550580   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:40.550587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:40.550654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:40.605122   72712 cri.go:89] found id: ""
	I0425 20:05:40.605153   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.605164   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:40.605172   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:40.605232   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:40.675713   72712 cri.go:89] found id: ""
	I0425 20:05:40.675745   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.675755   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:40.675769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:40.675828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:40.716064   72712 cri.go:89] found id: ""
	I0425 20:05:40.716093   72712 logs.go:276] 0 containers: []
	W0425 20:05:40.716101   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:40.716109   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:40.716120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:40.781395   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:40.781441   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:40.797597   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:40.797628   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:40.880931   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:40.880956   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:40.880971   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:40.970770   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:40.970800   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:40.373398   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.873163   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:40.918560   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.417610   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:45.420963   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:42.883556   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:44.883719   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:43.520389   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:43.537668   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:43.537729   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:43.578137   72712 cri.go:89] found id: ""
	I0425 20:05:43.578166   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.578175   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:43.578180   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:43.578247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:43.617428   72712 cri.go:89] found id: ""
	I0425 20:05:43.617454   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.617462   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:43.617466   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:43.617519   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:43.655401   72712 cri.go:89] found id: ""
	I0425 20:05:43.655431   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.655443   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:43.655450   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:43.655514   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:43.695183   72712 cri.go:89] found id: ""
	I0425 20:05:43.695212   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.695230   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:43.695238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:43.695316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:43.735056   72712 cri.go:89] found id: ""
	I0425 20:05:43.735086   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.735098   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:43.735104   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:43.735162   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:43.774761   72712 cri.go:89] found id: ""
	I0425 20:05:43.774789   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.774799   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:43.774830   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:43.774889   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:43.819102   72712 cri.go:89] found id: ""
	I0425 20:05:43.819128   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.819138   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:43.819146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:43.819206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:43.858235   72712 cri.go:89] found id: ""
	I0425 20:05:43.858267   72712 logs.go:276] 0 containers: []
	W0425 20:05:43.858278   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:43.858289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:43.858303   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:43.940756   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:43.940794   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:43.985878   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:43.985925   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:44.040177   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:44.040207   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:44.055912   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:44.055942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:44.143724   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:46.643923   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:46.658863   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:46.658941   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:46.697826   72712 cri.go:89] found id: ""
	I0425 20:05:46.697850   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.697858   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:46.697884   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:46.697947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:46.739850   72712 cri.go:89] found id: ""
	I0425 20:05:46.739877   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.739888   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:46.739897   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:46.739955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:46.781212   72712 cri.go:89] found id: ""
	I0425 20:05:46.781241   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.781256   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:46.781262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:46.781321   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:46.826005   72712 cri.go:89] found id: ""
	I0425 20:05:46.826036   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.826047   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:46.826055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:46.826109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:46.865428   72712 cri.go:89] found id: ""
	I0425 20:05:46.865456   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.865465   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:46.865472   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:46.865522   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:46.914860   72712 cri.go:89] found id: ""
	I0425 20:05:46.914887   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.914897   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:46.914907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:46.914968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:46.955323   72712 cri.go:89] found id: ""
	I0425 20:05:46.955355   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.955365   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:46.955373   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:46.955436   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:46.999369   72712 cri.go:89] found id: ""
	I0425 20:05:46.999396   72712 logs.go:276] 0 containers: []
	W0425 20:05:46.999408   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:46.999419   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:46.999464   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:47.013865   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:47.013893   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:47.094725   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:47.094755   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:47.094771   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:47.178380   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:47.178426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:47.227217   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:47.227249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:45.375272   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.872640   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:47.917579   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.918001   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:46.884746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:48.884818   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:49.780217   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:49.795690   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:49.795760   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:49.834909   72712 cri.go:89] found id: ""
	I0425 20:05:49.834935   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.834943   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:49.834951   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:49.835004   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:49.872717   72712 cri.go:89] found id: ""
	I0425 20:05:49.872747   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.872755   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:49.872762   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:49.872807   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:49.919348   72712 cri.go:89] found id: ""
	I0425 20:05:49.919376   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.919387   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:49.919395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:49.919465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:49.959673   72712 cri.go:89] found id: ""
	I0425 20:05:49.959705   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.959716   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:49.959728   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:49.959796   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:49.999276   72712 cri.go:89] found id: ""
	I0425 20:05:49.999299   72712 logs.go:276] 0 containers: []
	W0425 20:05:49.999306   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:49.999312   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:49.999361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:50.037426   72712 cri.go:89] found id: ""
	I0425 20:05:50.037454   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.037461   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:50.037466   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:50.037510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:50.080666   72712 cri.go:89] found id: ""
	I0425 20:05:50.080695   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.080703   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:50.080719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:50.080776   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:50.126065   72712 cri.go:89] found id: ""
	I0425 20:05:50.126111   72712 logs.go:276] 0 containers: []
	W0425 20:05:50.126123   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:50.126134   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:50.126148   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:50.140778   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:50.140805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:50.213282   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:50.213308   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:50.213320   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:50.293798   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:50.293832   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:50.336823   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:50.336859   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:49.873685   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.372830   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.919781   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:54.417518   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:51.382698   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:53.392894   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:55.884231   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:52.892579   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:52.909556   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:52.909629   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:52.948098   72712 cri.go:89] found id: ""
	I0425 20:05:52.948127   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.948138   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:52.948146   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:52.948206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:52.988813   72712 cri.go:89] found id: ""
	I0425 20:05:52.988840   72712 logs.go:276] 0 containers: []
	W0425 20:05:52.988848   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:52.988853   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:52.988898   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:53.032181   72712 cri.go:89] found id: ""
	I0425 20:05:53.032211   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.032222   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:53.032230   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:53.032288   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:53.075496   72712 cri.go:89] found id: ""
	I0425 20:05:53.075528   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.075538   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:53.075543   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:53.075599   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:53.119037   72712 cri.go:89] found id: ""
	I0425 20:05:53.119070   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.119082   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:53.119095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:53.119158   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:53.158276   72712 cri.go:89] found id: ""
	I0425 20:05:53.158303   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.158314   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:53.158321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:53.158381   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:53.196168   72712 cri.go:89] found id: ""
	I0425 20:05:53.196199   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.196211   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:53.196219   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:53.196277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:53.235212   72712 cri.go:89] found id: ""
	I0425 20:05:53.235235   72712 logs.go:276] 0 containers: []
	W0425 20:05:53.235243   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:53.235250   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:53.235261   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:53.290435   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:53.290474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:53.306351   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:53.306380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:53.388623   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:53.388652   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:53.388666   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:53.480388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:53.480426   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:56.027403   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:56.042683   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:56.042755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:56.083672   72712 cri.go:89] found id: ""
	I0425 20:05:56.083706   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.083718   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:56.083725   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:56.083790   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:56.124071   72712 cri.go:89] found id: ""
	I0425 20:05:56.124105   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.124126   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:56.124134   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:56.124200   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:56.166692   72712 cri.go:89] found id: ""
	I0425 20:05:56.166724   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.166737   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:56.166744   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:56.166808   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:56.203833   72712 cri.go:89] found id: ""
	I0425 20:05:56.203871   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.203884   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:56.203892   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:56.203950   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:56.242277   72712 cri.go:89] found id: ""
	I0425 20:05:56.242319   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.242341   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:56.242349   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:56.242416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:56.281697   72712 cri.go:89] found id: ""
	I0425 20:05:56.281726   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.281733   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:56.281739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:56.281812   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:56.322190   72712 cri.go:89] found id: ""
	I0425 20:05:56.322233   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.322243   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:56.322248   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:56.322310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:56.364831   72712 cri.go:89] found id: ""
	I0425 20:05:56.364853   72712 logs.go:276] 0 containers: []
	W0425 20:05:56.364864   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:56.364875   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:56.364889   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:56.422824   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:56.422856   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:56.437619   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:56.437641   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:56.512938   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:56.512961   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:56.512977   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:56.598670   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:56.598708   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:05:54.872566   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.873184   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:56.917352   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.421645   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:58.383740   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:00.384113   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:05:59.150322   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:05:59.166883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:05:59.166956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:05:59.205086   72712 cri.go:89] found id: ""
	I0425 20:05:59.205112   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.205121   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:05:59.205126   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:05:59.205199   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:59.253430   72712 cri.go:89] found id: ""
	I0425 20:05:59.253458   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.253469   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:05:59.253478   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:05:59.253539   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:05:59.293691   72712 cri.go:89] found id: ""
	I0425 20:05:59.293719   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.293731   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:05:59.293738   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:05:59.293801   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:05:59.331580   72712 cri.go:89] found id: ""
	I0425 20:05:59.331604   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.331613   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:05:59.331619   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:05:59.331663   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:05:59.369985   72712 cri.go:89] found id: ""
	I0425 20:05:59.370012   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.370023   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:05:59.370031   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:05:59.370095   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:05:59.411636   72712 cri.go:89] found id: ""
	I0425 20:05:59.411662   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.411670   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:05:59.411676   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:05:59.411733   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:05:59.454735   72712 cri.go:89] found id: ""
	I0425 20:05:59.454762   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.454774   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:05:59.454782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:05:59.454839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:05:59.497664   72712 cri.go:89] found id: ""
	I0425 20:05:59.497694   72712 logs.go:276] 0 containers: []
	W0425 20:05:59.497704   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:05:59.497715   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:05:59.497731   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:05:59.556694   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:05:59.556728   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:05:59.572160   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:05:59.572187   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:05:59.649040   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:05:59.649063   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:05:59.649083   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:05:59.727941   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:05:59.727975   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.275513   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:02.290486   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:02.290557   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:02.332217   72712 cri.go:89] found id: ""
	I0425 20:06:02.332255   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.332273   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:02.332281   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:02.332357   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:05:58.873314   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.373601   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:01.916947   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.418479   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.384744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:04.885488   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:02.373346   72712 cri.go:89] found id: ""
	I0425 20:06:02.373370   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.373377   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:02.373382   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:02.373439   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:02.415835   72712 cri.go:89] found id: ""
	I0425 20:06:02.415861   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.415873   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:02.415881   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:02.415939   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:02.458876   72712 cri.go:89] found id: ""
	I0425 20:06:02.458905   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.458917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:02.458926   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:02.459008   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:02.502092   72712 cri.go:89] found id: ""
	I0425 20:06:02.502127   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.502138   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:02.502146   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:02.502235   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:02.546357   72712 cri.go:89] found id: ""
	I0425 20:06:02.546383   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.546393   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:02.546399   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:02.546459   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:02.586842   72712 cri.go:89] found id: ""
	I0425 20:06:02.586870   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.586881   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:02.586887   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:02.586932   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:02.629305   72712 cri.go:89] found id: ""
	I0425 20:06:02.629339   72712 logs.go:276] 0 containers: []
	W0425 20:06:02.629350   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:02.629360   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:02.629374   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:02.676583   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:02.676626   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:02.731790   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:02.731825   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:02.747473   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:02.747499   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:02.824265   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:02.824289   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:02.824304   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.408968   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:05.423645   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:05.423713   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:05.467402   72712 cri.go:89] found id: ""
	I0425 20:06:05.467425   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.467434   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:05.467445   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:05.467510   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:05.503131   72712 cri.go:89] found id: ""
	I0425 20:06:05.503153   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.503161   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:05.503166   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:05.503216   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:05.545694   72712 cri.go:89] found id: ""
	I0425 20:06:05.545721   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.545732   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:05.545739   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:05.545804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:05.585879   72712 cri.go:89] found id: ""
	I0425 20:06:05.585905   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.585912   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:05.585917   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:05.585963   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:05.625520   72712 cri.go:89] found id: ""
	I0425 20:06:05.625549   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.625560   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:05.625567   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:05.625620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:05.664306   72712 cri.go:89] found id: ""
	I0425 20:06:05.664335   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.664345   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:05.664364   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:05.664437   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:05.705353   72712 cri.go:89] found id: ""
	I0425 20:06:05.705385   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.705397   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:05.705405   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:05.705468   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:05.743935   72712 cri.go:89] found id: ""
	I0425 20:06:05.743968   72712 logs.go:276] 0 containers: []
	W0425 20:06:05.743977   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:05.743986   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:05.743997   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:05.801190   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:05.801234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:05.817046   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:05.817074   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:05.899413   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:05.899443   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:05.899458   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:05.986303   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:05.986336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:03.872605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:05.876833   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.373392   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.916334   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.917480   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:06.887784   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:09.387085   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:08.531748   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:08.550667   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:08.550749   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:08.594062   72712 cri.go:89] found id: ""
	I0425 20:06:08.594093   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.594102   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:08.594108   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:08.594163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:08.635823   72712 cri.go:89] found id: ""
	I0425 20:06:08.635861   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.635872   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:08.635880   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:08.635944   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:08.675338   72712 cri.go:89] found id: ""
	I0425 20:06:08.675383   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.675395   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:08.675402   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:08.675463   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:08.715971   72712 cri.go:89] found id: ""
	I0425 20:06:08.716001   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.716012   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:08.716019   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:08.716088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:08.758565   72712 cri.go:89] found id: ""
	I0425 20:06:08.758597   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.758608   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:08.758616   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:08.758683   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:08.800179   72712 cri.go:89] found id: ""
	I0425 20:06:08.800207   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.800218   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:08.800226   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:08.800286   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:08.854603   72712 cri.go:89] found id: ""
	I0425 20:06:08.854639   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.854651   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:08.854659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:08.854724   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:08.904115   72712 cri.go:89] found id: ""
	I0425 20:06:08.904141   72712 logs.go:276] 0 containers: []
	W0425 20:06:08.904152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:08.904162   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:08.904177   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:08.921826   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:08.921855   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:09.003667   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:09.003687   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:09.003699   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:09.086301   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:09.086346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:09.138478   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:09.138516   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:11.704402   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:11.721810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:11.721902   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:11.768790   72712 cri.go:89] found id: ""
	I0425 20:06:11.768829   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.768850   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:11.768858   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:11.768928   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:11.813543   72712 cri.go:89] found id: ""
	I0425 20:06:11.813576   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.813588   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:11.813595   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:11.813654   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:11.853930   72712 cri.go:89] found id: ""
	I0425 20:06:11.853962   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.853972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:11.853980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:11.854044   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:11.900808   72712 cri.go:89] found id: ""
	I0425 20:06:11.900843   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.900853   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:11.900861   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:11.900919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:11.948850   72712 cri.go:89] found id: ""
	I0425 20:06:11.948876   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.948885   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:11.948890   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:11.948945   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:11.989326   72712 cri.go:89] found id: ""
	I0425 20:06:11.989356   72712 logs.go:276] 0 containers: []
	W0425 20:06:11.989365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:11.989371   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:11.989450   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:12.033912   72712 cri.go:89] found id: ""
	I0425 20:06:12.033943   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.033954   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:12.033959   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:12.034015   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:12.076170   72712 cri.go:89] found id: ""
	I0425 20:06:12.076199   72712 logs.go:276] 0 containers: []
	W0425 20:06:12.076209   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:12.076217   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:12.076230   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:12.124851   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:12.124881   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:12.178927   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:12.178964   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:12.194925   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:12.194952   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:12.272163   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:12.272187   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:12.272202   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:10.374908   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.871613   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:10.917911   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:12.918144   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:15.419043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:11.886066   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.383880   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:14.851400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:14.869893   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:14.869967   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:14.915793   72712 cri.go:89] found id: ""
	I0425 20:06:14.915820   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.915829   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:14.915836   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:14.915896   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:14.959549   72712 cri.go:89] found id: ""
	I0425 20:06:14.959576   72712 logs.go:276] 0 containers: []
	W0425 20:06:14.959587   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:14.959606   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:14.959672   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:15.001420   72712 cri.go:89] found id: ""
	I0425 20:06:15.001453   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.001465   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:15.001474   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:15.001552   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:15.047960   72712 cri.go:89] found id: ""
	I0425 20:06:15.047988   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.047996   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:15.048001   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:15.048049   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:15.096688   72712 cri.go:89] found id: ""
	I0425 20:06:15.096722   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.096730   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:15.096736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:15.096795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:15.142673   72712 cri.go:89] found id: ""
	I0425 20:06:15.142701   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.142712   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:15.142719   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:15.142784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:15.181729   72712 cri.go:89] found id: ""
	I0425 20:06:15.181757   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.181766   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:15.181773   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:15.181820   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:15.227858   72712 cri.go:89] found id: ""
	I0425 20:06:15.227886   72712 logs.go:276] 0 containers: []
	W0425 20:06:15.227897   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:15.227905   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:15.227917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:15.283253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:15.283293   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:15.305572   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:15.305604   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:15.439587   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:15.439615   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:15.439631   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:15.525678   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:15.525714   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:14.872914   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.873605   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:17.420065   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:19.917501   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:16.383915   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:18.883746   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:20.884190   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:18.078788   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:18.095012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:18.095083   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:18.136753   72712 cri.go:89] found id: ""
	I0425 20:06:18.136784   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.136796   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:18.136802   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:18.136850   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:18.184584   72712 cri.go:89] found id: ""
	I0425 20:06:18.184606   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.184614   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:18.184619   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:18.184691   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:18.228201   72712 cri.go:89] found id: ""
	I0425 20:06:18.228250   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.228263   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:18.228270   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:18.228326   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:18.267756   72712 cri.go:89] found id: ""
	I0425 20:06:18.267778   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.267786   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:18.267792   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:18.267855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:18.309727   72712 cri.go:89] found id: ""
	I0425 20:06:18.309755   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.309763   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:18.309769   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:18.309827   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:18.350549   72712 cri.go:89] found id: ""
	I0425 20:06:18.350580   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.350592   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:18.350599   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:18.350656   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:18.393868   72712 cri.go:89] found id: ""
	I0425 20:06:18.393891   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.393902   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:18.393910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:18.393989   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:18.435163   72712 cri.go:89] found id: ""
	I0425 20:06:18.435195   72712 logs.go:276] 0 containers: []
	W0425 20:06:18.435204   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:18.435211   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:18.435224   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:18.450871   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:18.450901   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:18.534501   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:18.534526   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:18.534538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:18.616979   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:18.617015   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:18.663568   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:18.663598   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.217744   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:21.235862   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:21.235955   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:21.288966   72712 cri.go:89] found id: ""
	I0425 20:06:21.288996   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.289005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:21.289014   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:21.289075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:21.362068   72712 cri.go:89] found id: ""
	I0425 20:06:21.362092   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.362101   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:21.362108   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:21.362168   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:21.416870   72712 cri.go:89] found id: ""
	I0425 20:06:21.416894   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.416901   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:21.416907   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:21.416956   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:21.461465   72712 cri.go:89] found id: ""
	I0425 20:06:21.461495   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.461503   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:21.461508   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:21.461570   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:21.499985   72712 cri.go:89] found id: ""
	I0425 20:06:21.500014   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.500025   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:21.500032   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:21.500081   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:21.543725   72712 cri.go:89] found id: ""
	I0425 20:06:21.543764   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.543776   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:21.543784   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:21.543841   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:21.586535   72712 cri.go:89] found id: ""
	I0425 20:06:21.586566   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.586578   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:21.586587   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:21.586644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:21.627885   72712 cri.go:89] found id: ""
	I0425 20:06:21.627912   72712 logs.go:276] 0 containers: []
	W0425 20:06:21.627921   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:21.627929   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:21.627942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:21.685973   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:21.686006   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:21.702529   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:21.702556   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:21.781634   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:21.781660   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:21.781673   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:21.862986   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:21.863027   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:19.372142   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.374479   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:21.918699   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.419088   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:23.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:25.883438   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:24.413547   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:24.428247   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:24.428323   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:24.468708   72712 cri.go:89] found id: ""
	I0425 20:06:24.468757   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.468768   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:24.468775   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:24.468836   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:24.507667   72712 cri.go:89] found id: ""
	I0425 20:06:24.507694   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.507702   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:24.507708   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:24.507769   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:24.548537   72712 cri.go:89] found id: ""
	I0425 20:06:24.548562   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.548570   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:24.548576   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:24.548625   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:24.591240   72712 cri.go:89] found id: ""
	I0425 20:06:24.591264   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.591272   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:24.591280   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:24.591325   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:24.631530   72712 cri.go:89] found id: ""
	I0425 20:06:24.631557   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.631568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:24.631575   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:24.631642   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:24.672878   72712 cri.go:89] found id: ""
	I0425 20:06:24.672903   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.672911   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:24.672916   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:24.672960   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:24.716168   72712 cri.go:89] found id: ""
	I0425 20:06:24.716193   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.716201   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:24.716206   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:24.716256   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:24.758061   72712 cri.go:89] found id: ""
	I0425 20:06:24.758098   72712 logs.go:276] 0 containers: []
	W0425 20:06:24.758110   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:24.758122   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:24.758135   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:24.839866   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:24.839900   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:24.889288   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:24.889380   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:24.946445   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:24.946488   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:24.963093   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:24.963126   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:25.044921   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:23.874297   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.372055   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.375436   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:26.916503   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:28.916669   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.887709   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.384645   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:27.545838   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:27.562659   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:27.562717   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:27.606462   72712 cri.go:89] found id: ""
	I0425 20:06:27.606491   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.606501   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:27.606509   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:27.606567   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:27.650475   72712 cri.go:89] found id: ""
	I0425 20:06:27.650505   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.650517   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:27.650524   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:27.650583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:27.695163   72712 cri.go:89] found id: ""
	I0425 20:06:27.695190   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.695201   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:27.695208   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:27.695265   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:27.741798   72712 cri.go:89] found id: ""
	I0425 20:06:27.741832   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.741842   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:27.741849   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:27.741904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:27.784146   72712 cri.go:89] found id: ""
	I0425 20:06:27.784175   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.784185   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:27.784193   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:27.784253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:27.827179   72712 cri.go:89] found id: ""
	I0425 20:06:27.827213   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.827225   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:27.827234   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:27.827298   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:27.872941   72712 cri.go:89] found id: ""
	I0425 20:06:27.872961   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.872980   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:27.872985   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:27.873040   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:27.917920   72712 cri.go:89] found id: ""
	I0425 20:06:27.917949   72712 logs.go:276] 0 containers: []
	W0425 20:06:27.917959   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:27.917970   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:27.917985   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:27.971411   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:27.971455   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:27.988704   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:27.988743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:28.064208   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:28.064229   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:28.064242   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:28.147388   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:28.147427   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.694349   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:30.708595   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:30.708671   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:30.752963   72712 cri.go:89] found id: ""
	I0425 20:06:30.752994   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.753005   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:30.753012   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:30.753073   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:30.795453   72712 cri.go:89] found id: ""
	I0425 20:06:30.795488   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.795498   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:30.795507   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:30.795574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:30.838945   72712 cri.go:89] found id: ""
	I0425 20:06:30.838970   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.838978   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:30.838984   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:30.839042   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:30.886128   72712 cri.go:89] found id: ""
	I0425 20:06:30.886160   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.886170   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:30.886178   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:30.886255   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:30.927773   72712 cri.go:89] found id: ""
	I0425 20:06:30.927805   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.927819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:30.927827   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:30.927893   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:30.968628   72712 cri.go:89] found id: ""
	I0425 20:06:30.968660   72712 logs.go:276] 0 containers: []
	W0425 20:06:30.968672   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:30.968680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:30.968743   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:31.014590   72712 cri.go:89] found id: ""
	I0425 20:06:31.014616   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.014627   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:31.014634   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:31.014697   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:31.053236   72712 cri.go:89] found id: ""
	I0425 20:06:31.053262   72712 logs.go:276] 0 containers: []
	W0425 20:06:31.053274   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:31.053285   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:31.053301   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:31.107797   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:31.107834   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:31.123675   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:31.123702   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:31.201180   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:31.201204   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:31.201215   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:31.289474   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:31.289512   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:30.873981   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.373083   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:30.918572   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.420043   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:35.421384   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:32.883164   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:34.883697   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:33.840828   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:33.857736   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:33.857795   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:33.898621   72712 cri.go:89] found id: ""
	I0425 20:06:33.898647   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.898658   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:33.898665   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:33.898727   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:33.939211   72712 cri.go:89] found id: ""
	I0425 20:06:33.939234   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.939245   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:33.939250   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:33.939305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:33.981872   72712 cri.go:89] found id: ""
	I0425 20:06:33.981896   72712 logs.go:276] 0 containers: []
	W0425 20:06:33.981903   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:33.981909   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:33.981965   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:34.027570   72712 cri.go:89] found id: ""
	I0425 20:06:34.027597   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.027609   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:34.027617   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:34.027675   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:34.072544   72712 cri.go:89] found id: ""
	I0425 20:06:34.072570   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.072586   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:34.072594   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:34.072674   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:34.119326   72712 cri.go:89] found id: ""
	I0425 20:06:34.119349   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.119358   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:34.119366   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:34.119423   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:34.169618   72712 cri.go:89] found id: ""
	I0425 20:06:34.169642   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.169650   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:34.169655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:34.169705   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:34.213570   72712 cri.go:89] found id: ""
	I0425 20:06:34.213593   72712 logs.go:276] 0 containers: []
	W0425 20:06:34.213601   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:34.213609   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:34.213621   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:34.255722   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:34.255756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:34.311113   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:34.311147   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:34.326869   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:34.326897   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:34.399765   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:34.399788   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:34.399801   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:36.986610   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:37.003090   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:37.003163   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:37.045929   72712 cri.go:89] found id: ""
	I0425 20:06:37.045956   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.045964   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:37.045969   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:37.046022   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:37.086835   72712 cri.go:89] found id: ""
	I0425 20:06:37.086868   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.086879   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:37.086885   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:37.086937   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:37.127454   72712 cri.go:89] found id: ""
	I0425 20:06:37.127479   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.127488   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:37.127494   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:37.127551   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:37.168878   72712 cri.go:89] found id: ""
	I0425 20:06:37.168904   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.168917   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:37.168924   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:37.168986   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:37.208859   72712 cri.go:89] found id: ""
	I0425 20:06:37.208889   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.208901   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:37.208914   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:37.208970   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:37.250407   72712 cri.go:89] found id: ""
	I0425 20:06:37.250439   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.250452   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:37.250467   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:37.250536   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:37.291004   72712 cri.go:89] found id: ""
	I0425 20:06:37.291040   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.291054   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:37.291063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:37.291125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:37.335573   72712 cri.go:89] found id: ""
	I0425 20:06:37.335597   72712 logs.go:276] 0 containers: []
	W0425 20:06:37.335608   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:37.335619   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:37.335635   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:35.873065   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.371805   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.426152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:39.916340   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:36.884518   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:38.884859   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:37.392773   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:37.392810   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:37.408311   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:37.408343   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:37.491376   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:37.491402   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:37.491416   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:37.574559   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:37.574600   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.125241   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:40.142254   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:40.142347   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:40.186859   72712 cri.go:89] found id: ""
	I0425 20:06:40.186893   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.186904   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:40.186911   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:40.186972   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:40.229247   72712 cri.go:89] found id: ""
	I0425 20:06:40.229275   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.229288   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:40.229295   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:40.229361   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:40.268853   72712 cri.go:89] found id: ""
	I0425 20:06:40.268879   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.268890   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:40.268897   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:40.268959   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:40.307621   72712 cri.go:89] found id: ""
	I0425 20:06:40.307650   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.307669   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:40.307677   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:40.307732   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:40.351448   72712 cri.go:89] found id: ""
	I0425 20:06:40.351472   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.351484   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:40.351492   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:40.351548   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:40.396771   72712 cri.go:89] found id: ""
	I0425 20:06:40.396804   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.396815   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:40.396824   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:40.396890   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:40.443605   72712 cri.go:89] found id: ""
	I0425 20:06:40.443634   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.443642   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:40.443647   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:40.443694   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:40.495496   72712 cri.go:89] found id: ""
	I0425 20:06:40.495525   72712 logs.go:276] 0 containers: []
	W0425 20:06:40.495536   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:40.495548   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:40.495563   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:40.539428   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:40.539457   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:40.596259   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:40.596305   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:40.613140   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:40.613167   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:40.701768   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:40.701793   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:40.701805   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:40.372225   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:42.373541   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.916879   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.917783   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:41.386292   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.885441   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:43.294502   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:43.310041   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:43.310113   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:43.351841   72712 cri.go:89] found id: ""
	I0425 20:06:43.351864   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.351872   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:43.351877   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:43.351924   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:43.395467   72712 cri.go:89] found id: ""
	I0425 20:06:43.395497   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.395509   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:43.395516   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:43.395576   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:43.437256   72712 cri.go:89] found id: ""
	I0425 20:06:43.437354   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.437375   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:43.437384   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:43.437465   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:43.480744   72712 cri.go:89] found id: ""
	I0425 20:06:43.480772   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.480783   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:43.480791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:43.480839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:43.519916   72712 cri.go:89] found id: ""
	I0425 20:06:43.519951   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.519961   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:43.519975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:43.520039   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:43.557861   72712 cri.go:89] found id: ""
	I0425 20:06:43.557890   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.557901   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:43.557910   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:43.557968   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:43.594423   72712 cri.go:89] found id: ""
	I0425 20:06:43.594449   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.594458   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:43.594464   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:43.594512   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:43.632227   72712 cri.go:89] found id: ""
	I0425 20:06:43.632253   72712 logs.go:276] 0 containers: []
	W0425 20:06:43.632262   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:43.632270   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:43.632281   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:43.688307   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:43.688336   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:43.703382   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:43.703407   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:43.782073   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:43.782093   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:43.782109   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:43.872811   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:43.872842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:46.420420   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:46.435110   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:46.435174   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:46.474019   72712 cri.go:89] found id: ""
	I0425 20:06:46.474044   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.474054   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:46.474067   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:46.474125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:46.517053   72712 cri.go:89] found id: ""
	I0425 20:06:46.517078   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.517088   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:46.517096   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:46.517150   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:46.560934   72712 cri.go:89] found id: ""
	I0425 20:06:46.560963   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.560972   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:46.560977   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:46.561030   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:46.605969   72712 cri.go:89] found id: ""
	I0425 20:06:46.605997   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.606007   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:46.606012   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:46.606061   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:46.647025   72712 cri.go:89] found id: ""
	I0425 20:06:46.647049   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.647058   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:46.647063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:46.647118   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:46.686931   72712 cri.go:89] found id: ""
	I0425 20:06:46.686956   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.686966   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:46.686975   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:46.687053   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:46.727183   72712 cri.go:89] found id: ""
	I0425 20:06:46.727207   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.727216   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:46.727224   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:46.727277   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:46.768030   72712 cri.go:89] found id: ""
	I0425 20:06:46.768059   72712 logs.go:276] 0 containers: []
	W0425 20:06:46.768073   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:46.768085   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:46.768105   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:46.823400   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:46.823439   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:46.838443   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:46.838468   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:46.919509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:46.919527   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:46.919538   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:46.996250   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:46.996284   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:44.873706   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.874042   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:45.918619   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.418507   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:46.384559   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:48.884184   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.885081   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:49.542696   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:49.557346   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:49.557444   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:49.595195   72712 cri.go:89] found id: ""
	I0425 20:06:49.595220   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.595231   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:49.595238   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:49.595305   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:49.641324   72712 cri.go:89] found id: ""
	I0425 20:06:49.641354   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.641365   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:49.641373   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:49.641426   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:49.681510   72712 cri.go:89] found id: ""
	I0425 20:06:49.681540   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.681552   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:49.681559   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:49.681620   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:49.721482   72712 cri.go:89] found id: ""
	I0425 20:06:49.721509   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.721518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:49.721525   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:49.721581   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:49.762682   72712 cri.go:89] found id: ""
	I0425 20:06:49.762710   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.762723   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:49.762731   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:49.762793   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:49.801892   72712 cri.go:89] found id: ""
	I0425 20:06:49.801920   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.801932   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:49.801943   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:49.802002   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:49.840347   72712 cri.go:89] found id: ""
	I0425 20:06:49.840376   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.840387   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:49.840395   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:49.840458   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:49.898486   72712 cri.go:89] found id: ""
	I0425 20:06:49.898516   72712 logs.go:276] 0 containers: []
	W0425 20:06:49.898527   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:49.898536   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:49.898547   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:49.952735   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:49.952775   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:49.967986   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:49.968018   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:50.048003   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:50.048024   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:50.048040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:50.126062   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:50.126098   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:49.373031   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:51.873671   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:50.917641   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.418642   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.421542   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:53.384273   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:55.384393   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:52.679721   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:52.695636   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:52.695700   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:52.738329   72712 cri.go:89] found id: ""
	I0425 20:06:52.738359   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.738368   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:52.738374   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:52.738420   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:52.779388   72712 cri.go:89] found id: ""
	I0425 20:06:52.779418   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.779426   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:52.779433   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:52.779496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:52.821105   72712 cri.go:89] found id: ""
	I0425 20:06:52.821137   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.821149   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:52.821168   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:52.821231   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:52.861781   72712 cri.go:89] found id: ""
	I0425 20:06:52.861817   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.861825   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:52.861831   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:52.861885   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:52.904602   72712 cri.go:89] found id: ""
	I0425 20:06:52.904633   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.904644   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:52.904651   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:52.904712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:52.951137   72712 cri.go:89] found id: ""
	I0425 20:06:52.951174   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.951183   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:52.951188   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:52.951234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:52.994199   72712 cri.go:89] found id: ""
	I0425 20:06:52.994249   72712 logs.go:276] 0 containers: []
	W0425 20:06:52.994257   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:52.994262   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:52.994315   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:53.031997   72712 cri.go:89] found id: ""
	I0425 20:06:53.032020   72712 logs.go:276] 0 containers: []
	W0425 20:06:53.032027   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:53.032035   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:53.032046   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:53.111351   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:53.111383   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:53.162470   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:53.162504   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:53.217188   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:53.217223   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:53.233071   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:53.233100   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:53.308983   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:55.809162   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:55.825185   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:55.825259   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:55.865963   72712 cri.go:89] found id: ""
	I0425 20:06:55.865989   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.866001   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:55.866009   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:55.866060   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:55.920565   72712 cri.go:89] found id: ""
	I0425 20:06:55.920601   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.920612   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:55.920620   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:55.920677   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:55.962643   72712 cri.go:89] found id: ""
	I0425 20:06:55.962669   72712 logs.go:276] 0 containers: []
	W0425 20:06:55.962677   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:55.962684   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:55.962738   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:56.000737   72712 cri.go:89] found id: ""
	I0425 20:06:56.000764   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.000773   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:56.000782   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:56.000828   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:56.042226   72712 cri.go:89] found id: ""
	I0425 20:06:56.042251   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.042259   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:56.042265   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:56.042316   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:56.080765   72712 cri.go:89] found id: ""
	I0425 20:06:56.080788   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.080798   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:56.080810   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:56.080869   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:56.119563   72712 cri.go:89] found id: ""
	I0425 20:06:56.119590   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.119602   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:56.119608   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:56.119667   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:56.160136   72712 cri.go:89] found id: ""
	I0425 20:06:56.160162   72712 logs.go:276] 0 containers: []
	W0425 20:06:56.160170   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:56.160179   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:56.160193   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:56.213506   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:56.213539   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:56.232121   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:56.232150   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:56.336606   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:06:56.336629   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:56.336640   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:56.426867   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:56.426908   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:54.374441   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:56.374847   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.916077   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.916521   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:57.384779   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:59.884281   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:06:58.975395   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:06:58.991064   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:06:58.991125   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:06:59.031157   72712 cri.go:89] found id: ""
	I0425 20:06:59.031179   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.031190   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:06:59.031197   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:06:59.031253   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:06:59.071893   72712 cri.go:89] found id: ""
	I0425 20:06:59.071923   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.071931   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:06:59.071937   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:06:59.071998   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:06:59.114714   72712 cri.go:89] found id: ""
	I0425 20:06:59.114749   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.114760   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:06:59.114768   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:06:59.114840   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:06:59.159482   72712 cri.go:89] found id: ""
	I0425 20:06:59.159510   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.159518   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:06:59.159523   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:06:59.159575   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:59.201218   72712 cri.go:89] found id: ""
	I0425 20:06:59.201245   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.201253   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:06:59.201263   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:06:59.201312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:06:59.247277   72712 cri.go:89] found id: ""
	I0425 20:06:59.247305   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.247316   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:06:59.247324   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:06:59.247379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:06:59.286713   72712 cri.go:89] found id: ""
	I0425 20:06:59.286738   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.286746   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:06:59.286751   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:06:59.286804   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:06:59.332263   72712 cri.go:89] found id: ""
	I0425 20:06:59.332296   72712 logs.go:276] 0 containers: []
	W0425 20:06:59.332320   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:06:59.332332   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:06:59.332346   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:06:59.416446   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:06:59.416477   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:06:59.462125   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:06:59.462166   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:06:59.514881   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:06:59.514907   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:06:59.530109   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:06:59.530134   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:06:59.605820   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.106478   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:02.124859   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:02.124934   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:02.180491   72712 cri.go:89] found id: ""
	I0425 20:07:02.180526   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.180537   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:02.180545   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:02.180601   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:02.237075   72712 cri.go:89] found id: ""
	I0425 20:07:02.237104   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.237118   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:02.237126   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:02.237190   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:02.295104   72712 cri.go:89] found id: ""
	I0425 20:07:02.295129   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.295140   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:02.295148   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:02.295210   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:02.335392   72712 cri.go:89] found id: ""
	I0425 20:07:02.335418   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.335428   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:02.335435   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:02.335496   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:06:58.871748   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.372545   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.373424   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.917135   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:03.917504   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:01.885744   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:04.385280   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:02.376964   72712 cri.go:89] found id: ""
	I0425 20:07:02.376990   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.377002   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:02.377009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:02.377066   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:02.415460   72712 cri.go:89] found id: ""
	I0425 20:07:02.415484   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.415491   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:02.415496   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:02.415550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:02.461946   72712 cri.go:89] found id: ""
	I0425 20:07:02.461972   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.461993   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:02.462009   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:02.462075   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:02.502829   72712 cri.go:89] found id: ""
	I0425 20:07:02.502851   72712 logs.go:276] 0 containers: []
	W0425 20:07:02.502858   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:02.502866   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:02.502878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:02.558264   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:02.558296   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:02.574175   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:02.574225   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:02.649363   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:02.649389   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:02.649404   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:02.730528   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:02.730560   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.276648   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:05.292055   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:05.292121   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:05.332849   72712 cri.go:89] found id: ""
	I0425 20:07:05.332874   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.332884   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:05.332892   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:05.332954   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:05.376446   72712 cri.go:89] found id: ""
	I0425 20:07:05.376475   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.376487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:05.376494   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:05.376556   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:05.418635   72712 cri.go:89] found id: ""
	I0425 20:07:05.418664   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.418675   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:05.418682   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:05.418745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:05.459082   72712 cri.go:89] found id: ""
	I0425 20:07:05.459113   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.459123   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:05.459128   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:05.459175   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:05.498473   72712 cri.go:89] found id: ""
	I0425 20:07:05.498502   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.498514   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:05.498521   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:05.498578   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:05.543121   72712 cri.go:89] found id: ""
	I0425 20:07:05.543150   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.543159   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:05.543164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:05.543211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:05.585722   72712 cri.go:89] found id: ""
	I0425 20:07:05.585748   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.585758   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:05.585766   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:05.585826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:05.629614   72712 cri.go:89] found id: ""
	I0425 20:07:05.629647   72712 logs.go:276] 0 containers: []
	W0425 20:07:05.629661   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:05.629671   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:05.629685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:05.683974   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:05.684007   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:05.700651   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:05.700685   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:05.782097   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:05.782127   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:05.782142   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:05.863881   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:05.863918   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:05.374553   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:07.872114   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.417080   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.417436   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:10.418259   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:06.885509   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:09.383078   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:08.412898   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:08.428152   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:08.428206   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:08.468403   72712 cri.go:89] found id: ""
	I0425 20:07:08.468441   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.468455   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:08.468464   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:08.468529   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:08.511246   72712 cri.go:89] found id: ""
	I0425 20:07:08.511285   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.511297   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:08.511304   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:08.511363   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:08.553121   72712 cri.go:89] found id: ""
	I0425 20:07:08.553148   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.553155   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:08.553161   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:08.553214   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:08.589723   72712 cri.go:89] found id: ""
	I0425 20:07:08.589745   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.589755   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:08.589762   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:08.589826   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:08.629502   72712 cri.go:89] found id: ""
	I0425 20:07:08.629525   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.629533   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:08.629538   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:08.629591   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:08.677107   72712 cri.go:89] found id: ""
	I0425 20:07:08.677144   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.677153   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:08.677164   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:08.677212   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:08.716501   72712 cri.go:89] found id: ""
	I0425 20:07:08.716531   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.716542   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:08.716550   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:08.716611   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:08.763473   72712 cri.go:89] found id: ""
	I0425 20:07:08.763503   72712 logs.go:276] 0 containers: []
	W0425 20:07:08.763515   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:08.763526   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:08.763543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:08.848961   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:08.848985   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:08.849000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:08.945851   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:08.945890   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:08.989429   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:08.989460   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:09.042721   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:09.042756   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.559400   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:11.575100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:11.575180   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:11.613246   72712 cri.go:89] found id: ""
	I0425 20:07:11.613271   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.613284   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:11.613290   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:11.613351   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:11.655158   72712 cri.go:89] found id: ""
	I0425 20:07:11.655189   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.655200   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:11.655208   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:11.655266   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:11.695122   72712 cri.go:89] found id: ""
	I0425 20:07:11.695144   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.695151   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:11.695156   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:11.695205   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:11.735578   72712 cri.go:89] found id: ""
	I0425 20:07:11.735604   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.735615   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:11.735621   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:11.735680   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:11.774750   72712 cri.go:89] found id: ""
	I0425 20:07:11.774785   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.774795   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:11.774803   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:11.774855   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:11.814878   72712 cri.go:89] found id: ""
	I0425 20:07:11.814908   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.814920   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:11.814939   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:11.815000   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:11.853262   72712 cri.go:89] found id: ""
	I0425 20:07:11.853295   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.853306   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:11.853313   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:11.853379   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:11.897291   72712 cri.go:89] found id: ""
	I0425 20:07:11.897314   72712 logs.go:276] 0 containers: []
	W0425 20:07:11.897324   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:11.897333   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:11.897348   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:11.956913   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:11.956945   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:11.973787   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:11.973821   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:12.055801   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:12.055826   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:12.055842   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:12.140238   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:12.140270   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:10.372634   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.374037   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:12.418299   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.919967   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:11.383994   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:13.384162   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:15.884319   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:14.685296   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:14.699655   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:14.699740   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:14.741907   72712 cri.go:89] found id: ""
	I0425 20:07:14.741936   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.741947   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:14.741955   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:14.742017   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:14.786457   72712 cri.go:89] found id: ""
	I0425 20:07:14.786479   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.786487   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:14.786493   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:14.786537   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:14.825010   72712 cri.go:89] found id: ""
	I0425 20:07:14.825042   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.825055   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:14.825063   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:14.825124   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:14.874834   72712 cri.go:89] found id: ""
	I0425 20:07:14.874856   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.874867   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:14.874875   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:14.874933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:14.914636   72712 cri.go:89] found id: ""
	I0425 20:07:14.914674   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.914685   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:14.914693   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:14.914752   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:14.959327   72712 cri.go:89] found id: ""
	I0425 20:07:14.959356   72712 logs.go:276] 0 containers: []
	W0425 20:07:14.959365   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:14.959372   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:14.959425   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:15.000637   72712 cri.go:89] found id: ""
	I0425 20:07:15.000666   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.000674   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:15.000680   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:15.000728   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:15.040497   72712 cri.go:89] found id: ""
	I0425 20:07:15.040523   72712 logs.go:276] 0 containers: []
	W0425 20:07:15.040531   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:15.040539   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:15.040550   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:15.120206   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:15.120240   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:15.168292   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:15.168324   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:15.222133   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:15.222164   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:15.237719   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:15.237746   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:15.323404   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:14.872743   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.375231   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.420149   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:19.420277   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:18.384902   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:20.883469   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:17.823552   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:17.838837   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:17.838911   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:17.880547   72712 cri.go:89] found id: ""
	I0425 20:07:17.880584   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.880595   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:17.880608   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:17.880669   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:17.929700   72712 cri.go:89] found id: ""
	I0425 20:07:17.929730   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.929742   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:17.929797   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:17.929861   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:17.974057   72712 cri.go:89] found id: ""
	I0425 20:07:17.974081   72712 logs.go:276] 0 containers: []
	W0425 20:07:17.974088   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:17.974094   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:17.974142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:18.013173   72712 cri.go:89] found id: ""
	I0425 20:07:18.013200   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.013209   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:18.013215   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:18.013267   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:18.053525   72712 cri.go:89] found id: ""
	I0425 20:07:18.053557   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.053568   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:18.053580   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:18.053644   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:18.095972   72712 cri.go:89] found id: ""
	I0425 20:07:18.096004   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.096016   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:18.096024   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:18.096089   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:18.136792   72712 cri.go:89] found id: ""
	I0425 20:07:18.136823   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.136834   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:18.136842   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:18.136904   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:18.176562   72712 cri.go:89] found id: ""
	I0425 20:07:18.176594   72712 logs.go:276] 0 containers: []
	W0425 20:07:18.176605   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:18.176619   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:18.176634   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:18.254402   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:18.254440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:18.298075   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:18.298112   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:18.356091   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:18.356124   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:18.373788   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:18.373822   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:18.452545   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:20.952752   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:20.972054   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:20.972133   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:21.015572   72712 cri.go:89] found id: ""
	I0425 20:07:21.015602   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.015613   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:21.015621   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:21.015689   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:21.053313   72712 cri.go:89] found id: ""
	I0425 20:07:21.053342   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.053352   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:21.053359   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:21.053422   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:21.090343   72712 cri.go:89] found id: ""
	I0425 20:07:21.090373   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.090384   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:21.090391   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:21.090472   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:21.127148   72712 cri.go:89] found id: ""
	I0425 20:07:21.127174   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.127184   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:21.127192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:21.127258   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:21.167175   72712 cri.go:89] found id: ""
	I0425 20:07:21.167199   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.167207   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:21.167212   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:21.167263   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:21.212740   72712 cri.go:89] found id: ""
	I0425 20:07:21.212771   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.212783   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:21.212791   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:21.212856   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:21.250751   72712 cri.go:89] found id: ""
	I0425 20:07:21.250774   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.250782   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:21.250788   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:21.250833   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:21.292387   72712 cri.go:89] found id: ""
	I0425 20:07:21.292414   72712 logs.go:276] 0 containers: []
	W0425 20:07:21.292426   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:21.292436   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:21.292451   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:21.337695   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:21.337726   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:21.395479   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:21.395520   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:21.411538   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:21.411564   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:21.493248   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:21.493270   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:21.493282   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:19.873680   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.372461   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:21.421770   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:23.426808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:22.883520   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.884554   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:24.076755   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:24.093549   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:24.093624   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:24.135660   72712 cri.go:89] found id: ""
	I0425 20:07:24.135686   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.135694   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:24.135705   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:24.135784   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:24.179778   72712 cri.go:89] found id: ""
	I0425 20:07:24.179799   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.179807   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:24.179824   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:24.179883   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.226745   72712 cri.go:89] found id: ""
	I0425 20:07:24.226771   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.226780   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:24.226785   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:24.226839   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:24.273302   72712 cri.go:89] found id: ""
	I0425 20:07:24.273327   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.273347   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:24.273354   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:24.273421   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:24.314117   72712 cri.go:89] found id: ""
	I0425 20:07:24.314149   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.314160   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:24.314167   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:24.314247   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:24.353144   72712 cri.go:89] found id: ""
	I0425 20:07:24.353173   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.353184   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:24.353192   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:24.353292   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:24.395899   72712 cri.go:89] found id: ""
	I0425 20:07:24.395925   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.395933   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:24.395938   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:24.395988   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:24.444470   72712 cri.go:89] found id: ""
	I0425 20:07:24.444503   72712 logs.go:276] 0 containers: []
	W0425 20:07:24.444514   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:24.444525   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:24.444540   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:24.499845   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:24.499876   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:24.517421   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:24.517449   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:24.596509   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:24.596530   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:24.596543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:24.710844   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:24.710878   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.259541   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:27.275551   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:27.275609   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:27.314610   72712 cri.go:89] found id: ""
	I0425 20:07:27.314640   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.314651   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:27.314656   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:27.314712   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:27.350100   72712 cri.go:89] found id: ""
	I0425 20:07:27.350132   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.350151   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:27.350158   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:27.350226   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:24.373886   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:26.873863   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:25.917794   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:28.417757   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:30.419922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.384565   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:29.385043   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:27.390197   72712 cri.go:89] found id: ""
	I0425 20:07:27.390238   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.390249   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:27.390257   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:27.390312   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:27.431936   72712 cri.go:89] found id: ""
	I0425 20:07:27.431961   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.431973   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:27.431980   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:27.432038   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:27.469175   72712 cri.go:89] found id: ""
	I0425 20:07:27.469204   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.469212   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:27.469218   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:27.469276   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:27.509385   72712 cri.go:89] found id: ""
	I0425 20:07:27.509416   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.509428   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:27.509436   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:27.509503   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:27.548997   72712 cri.go:89] found id: ""
	I0425 20:07:27.549034   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.549045   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:27.549052   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:27.549111   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:27.588925   72712 cri.go:89] found id: ""
	I0425 20:07:27.588959   72712 logs.go:276] 0 containers: []
	W0425 20:07:27.588973   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:27.588985   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:27.589000   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:27.635005   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:27.635040   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:27.686587   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:27.686617   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:27.702913   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:27.702942   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:27.775525   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:27.775551   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:27.775562   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.352358   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:30.367016   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:30.367088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:30.410878   72712 cri.go:89] found id: ""
	I0425 20:07:30.410906   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.410917   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:30.410927   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:30.410985   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:30.456150   72712 cri.go:89] found id: ""
	I0425 20:07:30.456173   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.456181   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:30.456186   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:30.456234   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:30.495409   72712 cri.go:89] found id: ""
	I0425 20:07:30.495439   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.495450   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:30.495458   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:30.495516   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:30.535863   72712 cri.go:89] found id: ""
	I0425 20:07:30.535895   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.535906   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:30.535912   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:30.535971   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:30.573772   72712 cri.go:89] found id: ""
	I0425 20:07:30.573808   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.573819   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:30.573826   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:30.573892   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:30.626310   72712 cri.go:89] found id: ""
	I0425 20:07:30.626350   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.626362   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:30.626376   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:30.626438   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:30.666302   72712 cri.go:89] found id: ""
	I0425 20:07:30.666332   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.666343   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:30.666350   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:30.666413   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:30.703478   72712 cri.go:89] found id: ""
	I0425 20:07:30.703507   72712 logs.go:276] 0 containers: []
	W0425 20:07:30.703519   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:30.703529   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:30.703543   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:30.756532   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:30.756566   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:30.772128   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:30.772158   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:30.853701   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:30.853728   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:30.853743   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:30.935879   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:30.935917   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:29.372219   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.872125   72220 pod_ready.go:102] pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:32.865998   72220 pod_ready.go:81] duration metric: took 4m0.000690329s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:32.866038   72220 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6n2gd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0425 20:07:32.866057   72220 pod_ready.go:38] duration metric: took 4m13.047288103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:32.866091   72220 kubeadm.go:591] duration metric: took 4m22.882679222s to restartPrimaryControlPlane
	W0425 20:07:32.866150   72220 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:32.866182   72220 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:32.917319   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:35.421922   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:31.886418   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.894776   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:33.483702   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:33.498238   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:33.498310   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:33.545696   72712 cri.go:89] found id: ""
	I0425 20:07:33.545723   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.545731   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:33.545737   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:33.545791   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:33.590808   72712 cri.go:89] found id: ""
	I0425 20:07:33.590837   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.590849   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:33.590857   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:33.590919   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:33.634529   72712 cri.go:89] found id: ""
	I0425 20:07:33.634554   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.634562   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:33.634572   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:33.634640   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:33.679055   72712 cri.go:89] found id: ""
	I0425 20:07:33.679082   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.679093   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:33.679100   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:33.679160   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:33.720653   72712 cri.go:89] found id: ""
	I0425 20:07:33.720686   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.720698   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:33.720706   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:33.720777   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:33.766163   72712 cri.go:89] found id: ""
	I0425 20:07:33.766221   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.766233   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:33.766241   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:33.766314   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:33.810804   72712 cri.go:89] found id: ""
	I0425 20:07:33.810830   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.810839   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:33.810844   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:33.810908   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:33.858109   72712 cri.go:89] found id: ""
	I0425 20:07:33.858140   72712 logs.go:276] 0 containers: []
	W0425 20:07:33.858152   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:33.858162   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:33.858176   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:33.926296   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:33.926333   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:33.944220   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:33.944249   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:34.042119   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:34.042191   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:34.042234   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:34.143694   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:34.143732   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:36.691575   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:36.710408   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:36.710490   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:36.760097   72712 cri.go:89] found id: ""
	I0425 20:07:36.760135   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.760144   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:36.760150   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:36.760208   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:36.801508   72712 cri.go:89] found id: ""
	I0425 20:07:36.801532   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.801541   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:36.801546   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:36.801602   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:36.842293   72712 cri.go:89] found id: ""
	I0425 20:07:36.842328   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.842340   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:36.842355   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:36.842418   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:36.884101   72712 cri.go:89] found id: ""
	I0425 20:07:36.884131   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.884141   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:36.884149   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:36.884211   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:36.925007   72712 cri.go:89] found id: ""
	I0425 20:07:36.925032   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.925039   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:36.925045   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:36.925109   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:36.964975   72712 cri.go:89] found id: ""
	I0425 20:07:36.965009   72712 logs.go:276] 0 containers: []
	W0425 20:07:36.965020   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:36.965028   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:36.965088   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:37.030956   72712 cri.go:89] found id: ""
	I0425 20:07:37.030987   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.030999   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:37.031007   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:37.031080   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:37.105919   72712 cri.go:89] found id: ""
	I0425 20:07:37.105946   72712 logs.go:276] 0 containers: []
	W0425 20:07:37.105956   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:37.105967   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:37.105983   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:37.196376   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:37.196415   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:37.240296   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:37.240334   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:37.304336   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:37.304371   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:37.323146   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:37.323184   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:37.918245   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.418671   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:36.384384   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:38.387656   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:40.883973   72304 pod_ready.go:102] pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace has status "Ready":"False"
	W0425 20:07:37.414563   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:39.915087   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:39.930987   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:39.931068   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:39.967641   72712 cri.go:89] found id: ""
	I0425 20:07:39.967682   72712 logs.go:276] 0 containers: []
	W0425 20:07:39.967693   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:39.967698   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:39.967755   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:40.009924   72712 cri.go:89] found id: ""
	I0425 20:07:40.009951   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.009959   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:40.009969   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:40.010019   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:40.049644   72712 cri.go:89] found id: ""
	I0425 20:07:40.049675   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.049689   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:40.049697   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:40.049759   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:40.090487   72712 cri.go:89] found id: ""
	I0425 20:07:40.090509   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.090519   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:40.090524   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:40.090583   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:40.137634   72712 cri.go:89] found id: ""
	I0425 20:07:40.137664   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.137674   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:40.137681   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:40.137745   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:40.174832   72712 cri.go:89] found id: ""
	I0425 20:07:40.174863   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.174874   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:40.174882   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:40.174947   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:40.212559   72712 cri.go:89] found id: ""
	I0425 20:07:40.212585   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.212593   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:40.212598   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:40.212687   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:40.253459   72712 cri.go:89] found id: ""
	I0425 20:07:40.253494   72712 logs.go:276] 0 containers: []
	W0425 20:07:40.253506   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:40.253518   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:40.253533   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:40.311253   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:40.311288   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:40.326693   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:40.326722   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:40.405792   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:40.405816   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:40.405831   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:40.486712   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:40.486749   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:42.419025   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:44.916387   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:41.387375   72304 pod_ready.go:81] duration metric: took 4m0.010411263s for pod "metrics-server-569cc877fc-cphk6" in "kube-system" namespace to be "Ready" ...
	E0425 20:07:41.387396   72304 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:07:41.387402   72304 pod_ready.go:38] duration metric: took 4m6.083068398s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:07:41.387414   72304 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:07:41.387441   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:41.387498   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:41.459873   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:41.459899   72304 cri.go:89] found id: ""
	I0425 20:07:41.459907   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:41.459960   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.465470   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:41.465534   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:41.509504   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:41.509523   72304 cri.go:89] found id: ""
	I0425 20:07:41.509530   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:41.509584   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.515012   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:41.515070   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:41.562701   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:41.562727   72304 cri.go:89] found id: ""
	I0425 20:07:41.562737   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:41.562792   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.567856   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:41.567928   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:41.618411   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:41.618441   72304 cri.go:89] found id: ""
	I0425 20:07:41.618452   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:41.618510   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.625757   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:41.625826   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:41.672707   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:41.672734   72304 cri.go:89] found id: ""
	I0425 20:07:41.672741   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:41.672785   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.678040   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:41.678119   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:41.725172   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:41.725196   72304 cri.go:89] found id: ""
	I0425 20:07:41.725205   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:41.725264   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.730651   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:41.730718   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:41.777224   72304 cri.go:89] found id: ""
	I0425 20:07:41.777269   72304 logs.go:276] 0 containers: []
	W0425 20:07:41.777280   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:41.777290   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:41.777380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:41.821498   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:41.821524   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:41.821531   72304 cri.go:89] found id: ""
	I0425 20:07:41.821541   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:41.821599   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.827065   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:41.831900   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:41.831924   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:41.893198   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:41.893233   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:41.909141   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:41.909169   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:42.051260   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:42.051305   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:42.109173   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:42.109214   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:42.155862   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:42.155894   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:42.222430   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:42.222466   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:42.265323   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:42.265353   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:42.316534   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:42.316569   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:42.363543   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:42.363568   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:42.422389   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:42.422421   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:42.471230   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:42.471259   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.011223   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.011263   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:45.578411   72304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:45.597748   72304 api_server.go:72] duration metric: took 4m16.066757074s to wait for apiserver process to appear ...
	I0425 20:07:45.597777   72304 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:07:45.597813   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:45.597861   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:45.649452   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:45.649481   72304 cri.go:89] found id: ""
	I0425 20:07:45.649491   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:45.649534   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.654965   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:45.655023   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:45.701151   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:45.701177   72304 cri.go:89] found id: ""
	I0425 20:07:45.701186   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:45.701238   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.706702   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:45.706767   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:45.763142   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:45.763167   72304 cri.go:89] found id: ""
	I0425 20:07:45.763177   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:45.763220   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.768626   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:45.768684   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:45.816615   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:45.816648   72304 cri.go:89] found id: ""
	I0425 20:07:45.816656   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:45.816701   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.822714   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:45.822790   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:45.875652   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:45.875678   72304 cri.go:89] found id: ""
	I0425 20:07:45.875688   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:45.875737   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.881649   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:45.881719   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:45.930631   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:45.930656   72304 cri.go:89] found id: ""
	I0425 20:07:45.930666   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:45.930721   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:45.939712   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:45.939783   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:45.984646   72304 cri.go:89] found id: ""
	I0425 20:07:45.984684   72304 logs.go:276] 0 containers: []
	W0425 20:07:45.984693   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:45.984699   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:45.984754   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:46.029752   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.029777   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.029782   72304 cri.go:89] found id: ""
	I0425 20:07:46.029789   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:46.029845   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.035189   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:46.040479   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:46.040503   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:46.101469   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:46.101509   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:46.167362   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:46.167401   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:46.217732   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:46.217759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:46.264372   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:46.264404   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:43.037730   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:43.064471   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:43.064550   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:43.130075   72712 cri.go:89] found id: ""
	I0425 20:07:43.130111   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.130129   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:07:43.130136   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:43.130195   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:43.169628   72712 cri.go:89] found id: ""
	I0425 20:07:43.169663   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.169675   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:07:43.169682   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:43.169748   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:43.214845   72712 cri.go:89] found id: ""
	I0425 20:07:43.214869   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.214877   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:07:43.214883   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:43.214929   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:43.263047   72712 cri.go:89] found id: ""
	I0425 20:07:43.263069   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.263078   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:07:43.263083   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:43.263142   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:43.313179   72712 cri.go:89] found id: ""
	I0425 20:07:43.313213   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.313223   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:07:43.313231   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:43.313295   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:43.353440   72712 cri.go:89] found id: ""
	I0425 20:07:43.353468   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.353480   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:07:43.353488   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:43.353546   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:43.392261   72712 cri.go:89] found id: ""
	I0425 20:07:43.392288   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.392296   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:43.392321   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:07:43.392378   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:07:43.431111   72712 cri.go:89] found id: ""
	I0425 20:07:43.431139   72712 logs.go:276] 0 containers: []
	W0425 20:07:43.431147   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:07:43.431155   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:43.431165   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:43.485087   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:43.485120   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:43.501508   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:43.501536   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:07:43.586041   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:07:43.586073   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:43.586089   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:43.663194   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:07:43.663232   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:46.218461   72712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:07:46.233195   72712 kubeadm.go:591] duration metric: took 4m4.06065248s to restartPrimaryControlPlane
	W0425 20:07:46.233281   72712 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0425 20:07:46.233311   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:07:48.166680   72712 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.933342568s)
	I0425 20:07:48.166771   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:48.185391   72712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:07:48.198250   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:07:48.209825   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:07:48.209843   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:07:48.209897   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:07:48.220854   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:07:48.220909   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:07:48.231518   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:07:48.241515   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:07:48.241589   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:07:48.251764   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.261762   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:07:48.261813   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:07:48.271952   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:07:48.281914   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:07:48.281986   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:07:48.292879   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:07:48.372322   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:07:48.372460   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:07:48.529730   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:07:48.529854   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:07:48.529979   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:07:48.753171   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:07:48.755473   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:07:48.755590   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:07:48.755692   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:07:48.755809   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:07:48.755905   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:07:48.756132   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:07:48.756317   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:07:48.756867   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:07:48.757498   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:07:48.758073   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:07:48.758581   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:07:48.758745   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:07:48.758842   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:07:48.894873   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:07:48.946907   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:07:49.084938   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:07:49.201925   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:07:49.219675   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:07:49.220891   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:07:49.220951   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:07:49.387310   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:07:46.917886   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:48.919793   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:46.324627   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:46.324653   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:46.382068   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:46.382102   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:46.424672   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:46.424709   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:46.466659   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:46.466692   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:46.484868   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:46.484898   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:46.614688   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:46.614720   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:46.666805   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:46.666846   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:47.098854   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:47.098899   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:49.653042   72304 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8444/healthz ...
	I0425 20:07:49.657843   72304 api_server.go:279] https://192.168.39.123:8444/healthz returned 200:
	ok
	I0425 20:07:49.659251   72304 api_server.go:141] control plane version: v1.30.0
	I0425 20:07:49.659285   72304 api_server.go:131] duration metric: took 4.061499319s to wait for apiserver health ...
	I0425 20:07:49.659295   72304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:07:49.659321   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:07:49.659380   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:07:49.709699   72304 cri.go:89] found id: "7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:49.709721   72304 cri.go:89] found id: ""
	I0425 20:07:49.709729   72304 logs.go:276] 1 containers: [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa]
	I0425 20:07:49.709795   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.715369   72304 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:07:49.715429   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:07:49.773517   72304 cri.go:89] found id: "430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:49.773544   72304 cri.go:89] found id: ""
	I0425 20:07:49.773554   72304 logs.go:276] 1 containers: [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3]
	I0425 20:07:49.773617   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.778984   72304 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:07:49.779071   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:07:49.825707   72304 cri.go:89] found id: "2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:49.825739   72304 cri.go:89] found id: ""
	I0425 20:07:49.825746   72304 logs.go:276] 1 containers: [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1]
	I0425 20:07:49.825790   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.830613   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:07:49.830678   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:07:49.872068   72304 cri.go:89] found id: "a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:49.872094   72304 cri.go:89] found id: ""
	I0425 20:07:49.872104   72304 logs.go:276] 1 containers: [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075]
	I0425 20:07:49.872166   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.877311   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:07:49.877383   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:07:49.930182   72304 cri.go:89] found id: "bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:49.930216   72304 cri.go:89] found id: ""
	I0425 20:07:49.930228   72304 logs.go:276] 1 containers: [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c]
	I0425 20:07:49.930283   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.935415   72304 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:07:49.935484   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:07:49.985377   72304 cri.go:89] found id: "ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:49.985404   72304 cri.go:89] found id: ""
	I0425 20:07:49.985412   72304 logs.go:276] 1 containers: [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4]
	I0425 20:07:49.985469   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:49.991021   72304 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:07:49.991092   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:07:50.037755   72304 cri.go:89] found id: ""
	I0425 20:07:50.037787   72304 logs.go:276] 0 containers: []
	W0425 20:07:50.037802   72304 logs.go:278] No container was found matching "kindnet"
	I0425 20:07:50.037811   72304 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:07:50.037875   72304 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:07:50.083706   72304 cri.go:89] found id: "7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.083731   72304 cri.go:89] found id: "c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.083735   72304 cri.go:89] found id: ""
	I0425 20:07:50.083742   72304 logs.go:276] 2 containers: [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5]
	I0425 20:07:50.083793   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.088730   72304 ssh_runner.go:195] Run: which crictl
	I0425 20:07:50.094339   72304 logs.go:123] Gathering logs for etcd [430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3] ...
	I0425 20:07:50.094371   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 430ba8aceb30fca3ba508440ca119f019b4acd164c99cf55f219279c620954a3"
	I0425 20:07:50.161538   72304 logs.go:123] Gathering logs for storage-provisioner [7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06] ...
	I0425 20:07:50.161573   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7aef2f269df51d0807c87f189ec0e9b4465197a2eff8d2c24af70daf72326d06"
	I0425 20:07:50.204178   72304 logs.go:123] Gathering logs for storage-provisioner [c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5] ...
	I0425 20:07:50.204211   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1088dde2fde0bf8a5ea8fcc26492a14e20dc3b99378487a9148dc764f00a9a5"
	I0425 20:07:50.251315   72304 logs.go:123] Gathering logs for container status ...
	I0425 20:07:50.251344   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:07:50.315859   72304 logs.go:123] Gathering logs for kube-proxy [bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c] ...
	I0425 20:07:50.315886   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb19806d4c42c3469ca06ba18226323a4d5542d9b7d34f64896c049d4fc6c71c"
	I0425 20:07:50.367787   72304 logs.go:123] Gathering logs for kube-controller-manager [ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4] ...
	I0425 20:07:50.367829   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae2f5c52c77d76e8207ebf0a67646e6dd6e7db24c04b6b6480c4ebae1448dfc4"
	I0425 20:07:50.429509   72304 logs.go:123] Gathering logs for kubelet ...
	I0425 20:07:50.429541   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:07:50.488723   72304 logs.go:123] Gathering logs for dmesg ...
	I0425 20:07:50.488759   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:07:50.506838   72304 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:07:50.506879   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:07:50.629496   72304 logs.go:123] Gathering logs for kube-apiserver [7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa] ...
	I0425 20:07:50.629526   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c6a6c0bef83a43ce876e4424099fd3fef69ed97692a83951bcf11ce1056e5aa"
	I0425 20:07:50.689286   72304 logs.go:123] Gathering logs for coredns [2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1] ...
	I0425 20:07:50.689321   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2370c81d0f1fb2e8b5a331a8c9c71e5bc06983175371957e6b5725a3f067bdd1"
	I0425 20:07:50.731343   72304 logs.go:123] Gathering logs for kube-scheduler [a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075] ...
	I0425 20:07:50.731373   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a553ccfa984650048af11610d2e753e103fe261a5569421f5165423bbfe86075"
	I0425 20:07:50.772085   72304 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:07:50.772114   72304 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:07:49.389887   72712 out.go:204]   - Booting up control plane ...
	I0425 20:07:49.390011   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:07:49.395060   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:07:49.398108   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:07:49.398220   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:07:49.402596   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:07:53.651817   72304 system_pods.go:59] 8 kube-system pods found
	I0425 20:07:53.651845   72304 system_pods.go:61] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.651850   72304 system_pods.go:61] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.651854   72304 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.651859   72304 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.651862   72304 system_pods.go:61] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.651865   72304 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.651872   72304 system_pods.go:61] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.651878   72304 system_pods.go:61] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.651885   72304 system_pods.go:74] duration metric: took 3.992584481s to wait for pod list to return data ...
	I0425 20:07:53.651892   72304 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:07:53.654617   72304 default_sa.go:45] found service account: "default"
	I0425 20:07:53.654641   72304 default_sa.go:55] duration metric: took 2.742232ms for default service account to be created ...
	I0425 20:07:53.654649   72304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:07:53.660082   72304 system_pods.go:86] 8 kube-system pods found
	I0425 20:07:53.660110   72304 system_pods.go:89] "coredns-7db6d8ff4d-z6ls5" [5ef8d9f5-f623-4632-bb88-7e5c60220725] Running
	I0425 20:07:53.660116   72304 system_pods.go:89] "etcd-default-k8s-diff-port-142196" [e48d8961-a602-45cb-9330-7e405e364fc1] Running
	I0425 20:07:53.660121   72304 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-142196" [7744abb6-2345-4c2b-befd-85d94ed7eb0a] Running
	I0425 20:07:53.660127   72304 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-142196" [45b42996-e3bf-4c5e-9b93-cde6670fb346] Running
	I0425 20:07:53.660131   72304 system_pods.go:89] "kube-proxy-bqmtp" [dc6ef58b-09d4-4e88-925b-b5a3afc68361] Running
	I0425 20:07:53.660135   72304 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-142196" [54737b1e-3064-4692-82bf-694ba80d1b0f] Running
	I0425 20:07:53.660142   72304 system_pods.go:89] "metrics-server-569cc877fc-cphk6" [e42da9f0-2bd7-499e-a220-ac9fcbcfdc9f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:07:53.660148   72304 system_pods.go:89] "storage-provisioner" [82be8699-608a-4aff-aac4-c709cba8655b] Running
	I0425 20:07:53.660154   72304 system_pods.go:126] duration metric: took 5.50043ms to wait for k8s-apps to be running ...
	I0425 20:07:53.660161   72304 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:07:53.660201   72304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:07:53.677461   72304 system_svc.go:56] duration metric: took 17.289854ms WaitForService to wait for kubelet
	I0425 20:07:53.677499   72304 kubeadm.go:576] duration metric: took 4m24.146512306s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:07:53.677524   72304 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:07:53.681527   72304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:07:53.681562   72304 node_conditions.go:123] node cpu capacity is 2
	I0425 20:07:53.681576   72304 node_conditions.go:105] duration metric: took 4.045221ms to run NodePressure ...
	I0425 20:07:53.681591   72304 start.go:240] waiting for startup goroutines ...
	I0425 20:07:53.681605   72304 start.go:245] waiting for cluster config update ...
	I0425 20:07:53.681622   72304 start.go:254] writing updated cluster config ...
	I0425 20:07:53.682002   72304 ssh_runner.go:195] Run: rm -f paused
	I0425 20:07:53.732056   72304 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:07:53.734302   72304 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-142196" cluster and "default" namespace by default
	I0425 20:07:51.419808   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:53.916090   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:55.917139   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:07:58.417609   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:00.917152   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:02.918628   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.419508   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:05.765908   72220 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.899694836s)
	I0425 20:08:05.765989   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:05.787711   72220 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0425 20:08:05.801717   72220 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:08:05.813710   72220 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:08:05.813741   72220 kubeadm.go:156] found existing configuration files:
	
	I0425 20:08:05.813802   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:08:05.825122   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:08:05.825202   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:08:05.837118   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:08:05.848807   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:08:05.848880   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:08:05.862028   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.873795   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:08:05.873919   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:08:05.885577   72220 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:08:05.897605   72220 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:08:05.897685   72220 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0425 20:08:05.909284   72220 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:08:05.965574   72220 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0425 20:08:05.965663   72220 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:08:06.133359   72220 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:08:06.133525   72220 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:08:06.133675   72220 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:08:06.391437   72220 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:08:06.393805   72220 out.go:204]   - Generating certificates and keys ...
	I0425 20:08:06.393905   72220 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:08:06.393994   72220 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:08:06.394121   72220 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:08:06.394237   72220 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:08:06.394332   72220 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:08:06.394417   72220 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:08:06.394514   72220 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:08:06.396093   72220 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:08:06.396202   72220 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:08:06.396300   72220 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:08:06.396358   72220 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:08:06.396423   72220 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:08:06.683452   72220 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:08:06.778456   72220 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0425 20:08:06.923709   72220 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:08:07.079685   72220 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:08:07.170533   72220 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:08:07.171070   72220 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:08:07.173798   72220 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:08:07.175699   72220 out.go:204]   - Booting up control plane ...
	I0425 20:08:07.175824   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:08:07.175924   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:08:07.176060   72220 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:08:07.197685   72220 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:08:07.200579   72220 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:08:07.200645   72220 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:08:07.354665   72220 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0425 20:08:07.354779   72220 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0425 20:08:07.855900   72220 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.56346ms
	I0425 20:08:07.856015   72220 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0425 20:08:07.423114   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:09.425115   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:13.358654   72220 kubeadm.go:309] [api-check] The API server is healthy after 5.502458238s
	I0425 20:08:13.388381   72220 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0425 20:08:13.908867   72220 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0425 20:08:13.945417   72220 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0425 20:08:13.945708   72220 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-744552 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0425 20:08:13.959901   72220 kubeadm.go:309] [bootstrap-token] Using token: r2mxoe.iuelddsr8gvoq1wo
	I0425 20:08:13.961409   72220 out.go:204]   - Configuring RBAC rules ...
	I0425 20:08:13.961552   72220 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0425 20:08:13.970435   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0425 20:08:13.978933   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0425 20:08:13.982503   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0425 20:08:13.987029   72220 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0425 20:08:13.990969   72220 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0425 20:08:14.103051   72220 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0425 20:08:14.554715   72220 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0425 20:08:15.105951   72220 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0425 20:08:15.107134   72220 kubeadm.go:309] 
	I0425 20:08:15.107222   72220 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0425 20:08:15.107236   72220 kubeadm.go:309] 
	I0425 20:08:15.107336   72220 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0425 20:08:15.107349   72220 kubeadm.go:309] 
	I0425 20:08:15.107379   72220 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0425 20:08:15.107463   72220 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0425 20:08:15.107550   72220 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0425 20:08:15.107560   72220 kubeadm.go:309] 
	I0425 20:08:15.107657   72220 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0425 20:08:15.107668   72220 kubeadm.go:309] 
	I0425 20:08:15.107735   72220 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0425 20:08:15.107747   72220 kubeadm.go:309] 
	I0425 20:08:15.107807   72220 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0425 20:08:15.107935   72220 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0425 20:08:15.108030   72220 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0425 20:08:15.108042   72220 kubeadm.go:309] 
	I0425 20:08:15.108154   72220 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0425 20:08:15.108269   72220 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0425 20:08:15.108280   72220 kubeadm.go:309] 
	I0425 20:08:15.108395   72220 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.108556   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed \
	I0425 20:08:15.108594   72220 kubeadm.go:309] 	--control-plane 
	I0425 20:08:15.108603   72220 kubeadm.go:309] 
	I0425 20:08:15.108719   72220 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0425 20:08:15.108730   72220 kubeadm.go:309] 
	I0425 20:08:15.108849   72220 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token r2mxoe.iuelddsr8gvoq1wo \
	I0425 20:08:15.109004   72220 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b44516c1e48199272b26fdfb99d3f47b0e2136001d95c40aba309a88053212ed 
	I0425 20:08:15.109717   72220 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:08:15.109778   72220 cni.go:84] Creating CNI manager for ""
	I0425 20:08:15.109797   72220 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 20:08:15.111712   72220 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0425 20:08:11.918414   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:14.420753   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:15.113288   72220 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0425 20:08:15.129693   72220 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0425 20:08:15.157631   72220 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0425 20:08:15.157709   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.157760   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-744552 minikube.k8s.io/updated_at=2024_04_25T20_08_15_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=9b1f143bb00c241dc73ba7b698e8f6c1855732d7 minikube.k8s.io/name=no-preload-744552 minikube.k8s.io/primary=true
	I0425 20:08:15.374198   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:15.418592   72220 ops.go:34] apiserver oom_adj: -16
	I0425 20:08:15.874721   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.374969   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:16.875091   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.375038   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:17.874685   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:18.374802   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
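	Note: the "Configuring bridge CNI" step above writes the 496-byte conflist to /etc/cni/net.d/1-k8s.conflist on the node. To inspect what was actually written for this profile, something along these lines should work (a sketch only; the profile name and file path are taken from the log above):

	    minikube -p no-preload-744552 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist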
	I0425 20:08:16.917617   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:19.421721   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:18.874931   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.374961   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:19.874349   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.374787   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:20.875130   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.374959   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.874325   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.374798   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:22.875034   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:23.374899   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:21.917898   71966 pod_ready.go:102] pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:22.917132   71966 pod_ready.go:81] duration metric: took 4m0.007062693s for pod "metrics-server-569cc877fc-mlkqr" in "kube-system" namespace to be "Ready" ...
	E0425 20:08:22.917156   71966 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0425 20:08:22.917164   71966 pod_ready.go:38] duration metric: took 4m4.548150095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:22.917179   71966 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:22.917211   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:22.917270   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:22.982604   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:22.982631   71966 cri.go:89] found id: ""
	I0425 20:08:22.982640   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:22.982698   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:22.988558   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:22.988618   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:23.031937   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.031964   71966 cri.go:89] found id: ""
	I0425 20:08:23.031973   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:23.032031   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.037315   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:23.037371   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:23.089839   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.089862   71966 cri.go:89] found id: ""
	I0425 20:08:23.089872   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:23.089936   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.095247   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:23.095309   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:23.136257   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.136286   71966 cri.go:89] found id: ""
	I0425 20:08:23.136294   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:23.136357   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.142548   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:23.142608   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:23.186190   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.186229   71966 cri.go:89] found id: ""
	I0425 20:08:23.186239   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:23.186301   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.191422   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:23.191494   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:23.242326   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.242361   71966 cri.go:89] found id: ""
	I0425 20:08:23.242371   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:23.242437   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.248578   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:23.248642   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:23.286781   71966 cri.go:89] found id: ""
	I0425 20:08:23.286807   71966 logs.go:276] 0 containers: []
	W0425 20:08:23.286817   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:23.286823   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:23.286885   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:23.334728   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:23.334754   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.334761   71966 cri.go:89] found id: ""
	I0425 20:08:23.334770   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:23.334831   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.340288   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:23.344787   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:23.344808   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:23.401830   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:23.401865   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:23.425683   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:23.425715   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:23.568527   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:23.568558   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:23.608747   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:23.608776   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:23.647962   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:23.647996   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:23.687270   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:23.687308   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:23.745081   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:23.745112   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:23.799375   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:23.799405   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:23.853199   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:23.853232   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:23.896535   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:23.896571   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:23.964317   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:23.964350   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:24.013196   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:24.013231   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
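	Note: the "Gathering logs for ..." loop above simply re-runs crictl and journalctl on the node; it can be reproduced by hand with the same commands that appear in the log, for example (container IDs come from the crictl ps output; this is only a sketch of the commands already shown above):

	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo /usr/bin/crictl logs --tail 400 <container-id>
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400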
	I0425 20:08:23.874275   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.374250   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:24.874396   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.374767   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:25.874968   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.374333   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:26.874916   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.374369   72220 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0425 20:08:27.499044   72220 kubeadm.go:1107] duration metric: took 12.341393953s to wait for elevateKubeSystemPrivileges
	W0425 20:08:27.499078   72220 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0425 20:08:27.499087   72220 kubeadm.go:393] duration metric: took 5m17.572541498s to StartCluster
	I0425 20:08:27.499108   72220 settings.go:142] acquiring lock: {Name:mka80a7409c232572a87a7e873102b4c60b15b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.499189   72220 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 20:08:27.500940   72220 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/kubeconfig: {Name:mk94ad8468cf8a209be037eb28fe2d9a6a9aec2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 20:08:27.501192   72220 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.142 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0425 20:08:27.503257   72220 out.go:177] * Verifying Kubernetes components...
	I0425 20:08:27.501308   72220 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0425 20:08:27.501405   72220 config.go:182] Loaded profile config "no-preload-744552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 20:08:27.505389   72220 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0425 20:08:27.505403   72220 addons.go:69] Setting storage-provisioner=true in profile "no-preload-744552"
	I0425 20:08:27.505438   72220 addons.go:234] Setting addon storage-provisioner=true in "no-preload-744552"
	W0425 20:08:27.505453   72220 addons.go:243] addon storage-provisioner should already be in state true
	I0425 20:08:27.505490   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505505   72220 addons.go:69] Setting metrics-server=true in profile "no-preload-744552"
	I0425 20:08:27.505535   72220 addons.go:234] Setting addon metrics-server=true in "no-preload-744552"
	W0425 20:08:27.505546   72220 addons.go:243] addon metrics-server should already be in state true
	I0425 20:08:27.505574   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.505895   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.505922   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.505492   72220 addons.go:69] Setting default-storageclass=true in profile "no-preload-744552"
	I0425 20:08:27.505990   72220 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-744552"
	I0425 20:08:27.505952   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506099   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.506418   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.506467   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.523666   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0425 20:08:27.526950   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0425 20:08:27.526972   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.526981   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I0425 20:08:27.527536   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527606   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.527662   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.527683   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528039   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528059   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528122   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528228   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.528242   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.528601   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528644   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.528712   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.528735   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.528800   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.529228   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.529246   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.532151   72220 addons.go:234] Setting addon default-storageclass=true in "no-preload-744552"
	W0425 20:08:27.532171   72220 addons.go:243] addon default-storageclass should already be in state true
	I0425 20:08:27.532204   72220 host.go:66] Checking if "no-preload-744552" exists ...
	I0425 20:08:27.532543   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.532582   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.547165   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43549
	I0425 20:08:27.547700   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.548354   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.548368   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.548675   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.548793   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.550640   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.554301   72220 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0425 20:08:27.553061   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0425 20:08:27.553099   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0425 20:08:27.555613   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0425 20:08:27.555630   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0425 20:08:27.555652   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.556177   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556181   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.556724   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556739   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.556868   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.556879   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.557128   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.557700   72220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 20:08:27.557729   72220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 20:08:27.558142   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.558406   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.559420   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.559990   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.560057   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.560076   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.560177   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.560333   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.560549   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.560967   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.562839   72220 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0425 20:08:27.564442   72220 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.564480   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0425 20:08:27.564517   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.567912   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.568153   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.568171   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.570321   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.570514   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.570709   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.570945   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.578396   72220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
	I0425 20:08:27.586629   72220 main.go:141] libmachine: () Calling .GetVersion
	I0425 20:08:27.587070   72220 main.go:141] libmachine: Using API Version  1
	I0425 20:08:27.587082   72220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 20:08:27.587584   72220 main.go:141] libmachine: () Calling .GetMachineName
	I0425 20:08:27.587736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetState
	I0425 20:08:27.589708   72220 main.go:141] libmachine: (no-preload-744552) Calling .DriverName
	I0425 20:08:27.589937   72220 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.589948   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0425 20:08:27.589961   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHHostname
	I0425 20:08:27.592640   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.592983   72220 main.go:141] libmachine: (no-preload-744552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:c5:04", ip: ""} in network mk-no-preload-744552: {Iface:virbr2 ExpiryTime:2024-04-25 21:02:42 +0000 UTC Type:0 Mac:52:54:00:2f:c5:04 Iaid: IPaddr:192.168.72.142 Prefix:24 Hostname:no-preload-744552 Clientid:01:52:54:00:2f:c5:04}
	I0425 20:08:27.593007   72220 main.go:141] libmachine: (no-preload-744552) DBG | domain no-preload-744552 has defined IP address 192.168.72.142 and MAC address 52:54:00:2f:c5:04 in network mk-no-preload-744552
	I0425 20:08:27.593261   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHPort
	I0425 20:08:27.593541   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHKeyPath
	I0425 20:08:27.593736   72220 main.go:141] libmachine: (no-preload-744552) Calling .GetSSHUsername
	I0425 20:08:27.593906   72220 sshutil.go:53] new ssh client: &{IP:192.168.72.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/no-preload-744552/id_rsa Username:docker}
	I0425 20:08:27.783858   72220 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0425 20:08:27.820917   72220 node_ready.go:35] waiting up to 6m0s for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832349   72220 node_ready.go:49] node "no-preload-744552" has status "Ready":"True"
	I0425 20:08:27.832377   72220 node_ready.go:38] duration metric: took 11.423909ms for node "no-preload-744552" to be "Ready" ...
	I0425 20:08:27.832390   72220 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:27.844475   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:27.886461   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0425 20:08:27.886483   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0425 20:08:27.899413   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0425 20:08:27.931511   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0425 20:08:27.935073   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0425 20:08:27.935098   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0425 20:08:27.989052   72220 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:27.989082   72220 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0425 20:08:28.016326   72220 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0425 20:08:28.551863   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551894   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.551964   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.551976   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552255   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552280   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552292   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552315   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552358   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.552397   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552405   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552414   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.552421   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.552571   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552597   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.552710   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.552736   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.578416   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.578445   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.578730   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.578776   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.578789   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.945831   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.945861   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946170   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946191   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946214   72220 main.go:141] libmachine: Making call to close driver server
	I0425 20:08:28.946224   72220 main.go:141] libmachine: (no-preload-744552) Calling .Close
	I0425 20:08:28.946531   72220 main.go:141] libmachine: Successfully made call to close driver server
	I0425 20:08:28.946549   72220 main.go:141] libmachine: Making call to close connection to plugin binary
	I0425 20:08:28.946560   72220 addons.go:470] Verifying addon metrics-server=true in "no-preload-744552"
	I0425 20:08:28.946570   72220 main.go:141] libmachine: (no-preload-744552) DBG | Closing plugin on server side
	I0425 20:08:28.948485   72220 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
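	Note: metrics-server is the addon the MetricsServer tests wait on; in this run its pod never becomes Ready (it is listed as Pending further down, and the addon image is pointed at fake.domain, which presumably cannot be pulled). A quick way to check its status by hand, assuming the upstream k8s-app=metrics-server label (a sketch, not part of the test output):

	    kubectl --context no-preload-744552 -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context no-preload-744552 -n kube-system describe pod -l k8s-app=metrics-server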
	I0425 20:08:27.005360   71966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:27.024856   71966 api_server.go:72] duration metric: took 4m14.401244231s to wait for apiserver process to appear ...
	I0425 20:08:27.024881   71966 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:08:27.024922   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:27.024982   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:27.072098   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:27.072129   71966 cri.go:89] found id: ""
	I0425 20:08:27.072140   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:27.072210   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.077726   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:27.077793   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:27.118834   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:27.118855   71966 cri.go:89] found id: ""
	I0425 20:08:27.118864   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:27.118917   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.125277   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:27.125347   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:27.167036   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.167064   71966 cri.go:89] found id: ""
	I0425 20:08:27.167074   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:27.167131   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.172390   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:27.172468   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:27.212933   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:27.212957   71966 cri.go:89] found id: ""
	I0425 20:08:27.212967   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:27.213022   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.218033   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:27.218083   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:27.259294   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:27.259321   71966 cri.go:89] found id: ""
	I0425 20:08:27.259331   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:27.259384   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.265537   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:27.265610   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:27.312145   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:27.312174   71966 cri.go:89] found id: ""
	I0425 20:08:27.312183   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:27.312240   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.318346   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:27.318405   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:27.362467   71966 cri.go:89] found id: ""
	I0425 20:08:27.362495   71966 logs.go:276] 0 containers: []
	W0425 20:08:27.362504   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:27.362509   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:27.362569   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:27.406810   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:27.406834   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.406839   71966 cri.go:89] found id: ""
	I0425 20:08:27.406846   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:27.406903   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.412431   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:27.421695   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:27.421725   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:27.472832   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:27.472863   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:27.535799   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:27.535830   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:28.004964   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:28.005006   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:28.072378   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:28.072417   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:28.236479   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:28.236523   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:28.296095   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:28.296133   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:28.351290   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:28.351314   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:28.400529   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:28.400567   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:28.459149   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:28.459178   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:28.507818   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:28.507844   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:28.565596   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:28.565627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:28.588509   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:28.588535   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:29.403321   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:08:29.403717   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:29.404001   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:28.950127   72220 addons.go:505] duration metric: took 1.448816058s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0425 20:08:29.862142   72220 pod_ready.go:102] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"False"
	I0425 20:08:30.851653   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.851677   72220 pod_ready.go:81] duration metric: took 3.007171918s for pod "coredns-7db6d8ff4d-2mxxt" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.851689   72220 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857090   72220 pod_ready.go:92] pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.857108   72220 pod_ready.go:81] duration metric: took 5.412841ms for pod "coredns-7db6d8ff4d-xdl2d" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.857117   72220 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863315   72220 pod_ready.go:92] pod "etcd-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.863331   72220 pod_ready.go:81] duration metric: took 6.207835ms for pod "etcd-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.863339   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867557   72220 pod_ready.go:92] pod "kube-apiserver-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.867579   72220 pod_ready.go:81] duration metric: took 4.23311ms for pod "kube-apiserver-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.867590   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872391   72220 pod_ready.go:92] pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:30.872407   72220 pod_ready.go:81] duration metric: took 4.810397ms for pod "kube-controller-manager-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:30.872415   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249226   72220 pod_ready.go:92] pod "kube-proxy-22w7x" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.249259   72220 pod_ready.go:81] duration metric: took 376.837327ms for pod "kube-proxy-22w7x" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.249284   72220 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649908   72220 pod_ready.go:92] pod "kube-scheduler-no-preload-744552" in "kube-system" namespace has status "Ready":"True"
	I0425 20:08:31.649934   72220 pod_ready.go:81] duration metric: took 400.641991ms for pod "kube-scheduler-no-preload-744552" in "kube-system" namespace to be "Ready" ...
	I0425 20:08:31.649945   72220 pod_ready.go:38] duration metric: took 3.817541056s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0425 20:08:31.649962   72220 api_server.go:52] waiting for apiserver process to appear ...
	I0425 20:08:31.650025   72220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 20:08:31.684094   72220 api_server.go:72] duration metric: took 4.182865357s to wait for apiserver process to appear ...
	I0425 20:08:31.684123   72220 api_server.go:88] waiting for apiserver healthz status ...
	I0425 20:08:31.684146   72220 api_server.go:253] Checking apiserver healthz at https://192.168.72.142:8443/healthz ...
	I0425 20:08:31.689688   72220 api_server.go:279] https://192.168.72.142:8443/healthz returned 200:
	ok
	I0425 20:08:31.690939   72220 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.690963   72220 api_server.go:131] duration metric: took 6.831773ms to wait for apiserver health ...
	I0425 20:08:31.690973   72220 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.853816   72220 system_pods.go:59] 9 kube-system pods found
	I0425 20:08:31.853849   72220 system_pods.go:61] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:31.853856   72220 system_pods.go:61] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:31.853861   72220 system_pods.go:61] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:31.853868   72220 system_pods.go:61] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:31.853872   72220 system_pods.go:61] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:31.853877   72220 system_pods.go:61] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:31.853881   72220 system_pods.go:61] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:31.853889   72220 system_pods.go:61] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:31.853894   72220 system_pods.go:61] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:31.853907   72220 system_pods.go:74] duration metric: took 162.928561ms to wait for pod list to return data ...
	I0425 20:08:31.853916   72220 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:32.049906   72220 default_sa.go:45] found service account: "default"
	I0425 20:08:32.049932   72220 default_sa.go:55] duration metric: took 196.003422ms for default service account to be created ...
	I0425 20:08:32.049942   72220 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:32.255245   72220 system_pods.go:86] 9 kube-system pods found
	I0425 20:08:32.255290   72220 system_pods.go:89] "coredns-7db6d8ff4d-2mxxt" [44599c42-87cd-44ff-9377-fd52993919f6] Running
	I0425 20:08:32.255298   72220 system_pods.go:89] "coredns-7db6d8ff4d-xdl2d" [4f11bf4f-f370-4957-95a1-773d255d227b] Running
	I0425 20:08:32.255304   72220 system_pods.go:89] "etcd-no-preload-744552" [d3c2e3ca-06d0-4bdd-b536-98a834704b71] Running
	I0425 20:08:32.255311   72220 system_pods.go:89] "kube-apiserver-no-preload-744552" [bf22f5f5-7e44-4251-95bd-5836e63d5701] Running
	I0425 20:08:32.255317   72220 system_pods.go:89] "kube-controller-manager-no-preload-744552" [1f5e30c7-4610-493a-af09-17311e47dbae] Running
	I0425 20:08:32.255322   72220 system_pods.go:89] "kube-proxy-22w7x" [82dda9cd-3cf5-4fdd-b4b6-f88e0360f513] Running
	I0425 20:08:32.255328   72220 system_pods.go:89] "kube-scheduler-no-preload-744552" [4fba3af8-e9d9-416f-b3fd-0a1a8dbabd55] Running
	I0425 20:08:32.255338   72220 system_pods.go:89] "metrics-server-569cc877fc-zpj9f" [49e3f66c-0633-497b-81c9-2d68f1eeb45f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:32.255348   72220 system_pods.go:89] "storage-provisioner" [1960de28-d946-4cfb-99fd-dd89fd7f6e67] Running
	I0425 20:08:32.255368   72220 system_pods.go:126] duration metric: took 205.41905ms to wait for k8s-apps to be running ...
	I0425 20:08:32.255378   72220 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:32.255429   72220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:32.274141   72220 system_svc.go:56] duration metric: took 18.75721ms WaitForService to wait for kubelet
	I0425 20:08:32.274173   72220 kubeadm.go:576] duration metric: took 4.77294686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:32.274198   72220 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:32.449699   72220 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:32.449727   72220 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:32.449741   72220 node_conditions.go:105] duration metric: took 175.536406ms to run NodePressure ...
	I0425 20:08:32.449755   72220 start.go:240] waiting for startup goroutines ...
	I0425 20:08:32.449765   72220 start.go:245] waiting for cluster config update ...
	I0425 20:08:32.449778   72220 start.go:254] writing updated cluster config ...
	I0425 20:08:32.450108   72220 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:32.503317   72220 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:32.505391   72220 out.go:177] * Done! kubectl is now configured to use "no-preload-744552" cluster and "default" namespace by default
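	Note: the readiness gates logged above for this profile (kube-system pods running, default service account present, kubelet service active, NodePressure verified) can be re-checked by hand. A rough manual sketch, assuming the "no-preload-744552" context written by this start is still current:

	    kubectl --context no-preload-744552 -n kube-system get pods
	    kubectl --context no-preload-744552 -n default get serviceaccount default
	    minikube -p no-preload-744552 ssh "sudo systemctl is-active kubelet"
	    kubectl --context no-preload-744552 describe node | grep -A 5 'Capacity:'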
	I0425 20:08:31.153636   71966 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0425 20:08:31.158526   71966 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0425 20:08:31.159775   71966 api_server.go:141] control plane version: v1.30.0
	I0425 20:08:31.159817   71966 api_server.go:131] duration metric: took 4.134911832s to wait for apiserver health ...
	I0425 20:08:31.159827   71966 system_pods.go:43] waiting for kube-system pods to appear ...
	I0425 20:08:31.159847   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:08:31.159890   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:08:31.201597   71966 cri.go:89] found id: "911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:31.201616   71966 cri.go:89] found id: ""
	I0425 20:08:31.201625   71966 logs.go:276] 1 containers: [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5]
	I0425 20:08:31.201667   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.206973   71966 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:08:31.207039   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:08:31.248400   71966 cri.go:89] found id: "26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:31.248424   71966 cri.go:89] found id: ""
	I0425 20:08:31.248435   71966 logs.go:276] 1 containers: [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650]
	I0425 20:08:31.248496   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.253822   71966 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:08:31.253879   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:08:31.298921   71966 cri.go:89] found id: "8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:31.298946   71966 cri.go:89] found id: ""
	I0425 20:08:31.298956   71966 logs.go:276] 1 containers: [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0]
	I0425 20:08:31.299003   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.304691   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:08:31.304758   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:08:31.351773   71966 cri.go:89] found id: "3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:31.351796   71966 cri.go:89] found id: ""
	I0425 20:08:31.351804   71966 logs.go:276] 1 containers: [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4]
	I0425 20:08:31.351851   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.356599   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:08:31.356651   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:08:31.399655   71966 cri.go:89] found id: "1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:31.399678   71966 cri.go:89] found id: ""
	I0425 20:08:31.399686   71966 logs.go:276] 1 containers: [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149]
	I0425 20:08:31.399740   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.405103   71966 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:08:31.405154   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:08:31.452763   71966 cri.go:89] found id: "df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:31.452785   71966 cri.go:89] found id: ""
	I0425 20:08:31.452794   71966 logs.go:276] 1 containers: [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86]
	I0425 20:08:31.452840   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.457788   71966 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:08:31.457838   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:08:31.503746   71966 cri.go:89] found id: ""
	I0425 20:08:31.503780   71966 logs.go:276] 0 containers: []
	W0425 20:08:31.503791   71966 logs.go:278] No container was found matching "kindnet"
	I0425 20:08:31.503798   71966 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0425 20:08:31.503868   71966 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0425 20:08:31.548517   71966 cri.go:89] found id: "cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:31.548543   71966 cri.go:89] found id: "84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:31.548555   71966 cri.go:89] found id: ""
	I0425 20:08:31.548565   71966 logs.go:276] 2 containers: [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934]
	I0425 20:08:31.548631   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.553673   71966 ssh_runner.go:195] Run: which crictl
	I0425 20:08:31.558271   71966 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:08:31.558290   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:08:31.974349   71966 logs.go:123] Gathering logs for kubelet ...
	I0425 20:08:31.974387   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0425 20:08:32.033292   71966 logs.go:123] Gathering logs for dmesg ...
	I0425 20:08:32.033327   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:08:32.050762   71966 logs.go:123] Gathering logs for etcd [26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650] ...
	I0425 20:08:32.050791   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f6a9b78dc2364cde306eeeb8c1bffdb767ccfa18f1dba7fc60d7fb56155650"
	I0425 20:08:32.101591   71966 logs.go:123] Gathering logs for coredns [8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0] ...
	I0425 20:08:32.101627   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8acd5626916a29dbff0efb87459e3917ff0ec7041e8cea32546d5b2cb498d6f0"
	I0425 20:08:32.142626   71966 logs.go:123] Gathering logs for kube-controller-manager [df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86] ...
	I0425 20:08:32.142652   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 df45510448ab334e6e5a767ceb1671e2676615d99ce59947e4d78740bac2fd86"
	I0425 20:08:32.203270   71966 logs.go:123] Gathering logs for storage-provisioner [cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e] ...
	I0425 20:08:32.203315   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf330fbdb7c0d4cb3f87734e256041e7f3f9b62da73096009782dea75337de3e"
	I0425 20:08:32.247021   71966 logs.go:123] Gathering logs for storage-provisioner [84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934] ...
	I0425 20:08:32.247048   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84313d4e49ed155b1c669c288f16894b4832fc374413e0c4f9c7741bf29ed934"
	I0425 20:08:32.294900   71966 logs.go:123] Gathering logs for container status ...
	I0425 20:08:32.294936   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:08:32.353902   71966 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:08:32.353934   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0425 20:08:32.488543   71966 logs.go:123] Gathering logs for kube-apiserver [911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5] ...
	I0425 20:08:32.488584   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 911aab4d436ac3c20ed7f96c594f5691bd810f3f924426bb6aacca8185e400f5"
	I0425 20:08:32.569303   71966 logs.go:123] Gathering logs for kube-scheduler [3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4] ...
	I0425 20:08:32.569358   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bae27a3c70b5cd2ebc23b3810f128f43ec7c68b5f9b7b17c2385c4871e16eb4"
	I0425 20:08:32.622767   71966 logs.go:123] Gathering logs for kube-proxy [1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149] ...
	I0425 20:08:32.622802   71966 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c3e9dc1ffc5f27632af467b5c369f88093174f1a85c08dca1c51aeccc91d149"
	I0425 20:08:35.181779   71966 system_pods.go:59] 8 kube-system pods found
	I0425 20:08:35.181813   71966 system_pods.go:61] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.181820   71966 system_pods.go:61] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.181826   71966 system_pods.go:61] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.181832   71966 system_pods.go:61] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.181837   71966 system_pods.go:61] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.181843   71966 system_pods.go:61] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.181851   71966 system_pods.go:61] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.181858   71966 system_pods.go:61] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.181867   71966 system_pods.go:74] duration metric: took 4.022033823s to wait for pod list to return data ...
	I0425 20:08:35.181879   71966 default_sa.go:34] waiting for default service account to be created ...
	I0425 20:08:35.185387   71966 default_sa.go:45] found service account: "default"
	I0425 20:08:35.185413   71966 default_sa.go:55] duration metric: took 3.523751ms for default service account to be created ...
	I0425 20:08:35.185423   71966 system_pods.go:116] waiting for k8s-apps to be running ...
	I0425 20:08:35.195075   71966 system_pods.go:86] 8 kube-system pods found
	I0425 20:08:35.195099   71966 system_pods.go:89] "coredns-7db6d8ff4d-xsptj" [61b974e5-9b6e-4647-81cc-4fd8aa94077c] Running
	I0425 20:08:35.195104   71966 system_pods.go:89] "etcd-embed-certs-512173" [8a901d41-3f11-4b5e-9158-5c9f1bad54e9] Running
	I0425 20:08:35.195109   71966 system_pods.go:89] "kube-apiserver-embed-certs-512173" [edf50203-485d-451e-8499-80bfa068c536] Running
	I0425 20:08:35.195114   71966 system_pods.go:89] "kube-controller-manager-embed-certs-512173" [d07141c4-5777-4496-a178-10fc4654b0ff] Running
	I0425 20:08:35.195118   71966 system_pods.go:89] "kube-proxy-8247p" [0bc053d9-814c-4882-bd11-5111e5a72635] Running
	I0425 20:08:35.195122   71966 system_pods.go:89] "kube-scheduler-embed-certs-512173" [61997b85-a48a-45d4-a4b8-6dbcd51206a3] Running
	I0425 20:08:35.195128   71966 system_pods.go:89] "metrics-server-569cc877fc-mlkqr" [85113896-4f9c-4b53-8bc9-c138b8a643fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0425 20:08:35.195133   71966 system_pods.go:89] "storage-provisioner" [d1cd233f-57aa-4438-b18d-9b82f57c451d] Running
	I0425 20:08:35.195139   71966 system_pods.go:126] duration metric: took 9.711803ms to wait for k8s-apps to be running ...
	I0425 20:08:35.195155   71966 system_svc.go:44] waiting for kubelet service to be running ....
	I0425 20:08:35.195195   71966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:08:35.213494   71966 system_svc.go:56] duration metric: took 18.331225ms WaitForService to wait for kubelet
	I0425 20:08:35.213523   71966 kubeadm.go:576] duration metric: took 4m22.589912913s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0425 20:08:35.213545   71966 node_conditions.go:102] verifying NodePressure condition ...
	I0425 20:08:35.216461   71966 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0425 20:08:35.216481   71966 node_conditions.go:123] node cpu capacity is 2
	I0425 20:08:35.216493   71966 node_conditions.go:105] duration metric: took 2.94061ms to run NodePressure ...
	I0425 20:08:35.216502   71966 start.go:240] waiting for startup goroutines ...
	I0425 20:08:35.216509   71966 start.go:245] waiting for cluster config update ...
	I0425 20:08:35.216518   71966 start.go:254] writing updated cluster config ...
	I0425 20:08:35.216750   71966 ssh_runner.go:195] Run: rm -f paused
	I0425 20:08:35.265836   71966 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0425 20:08:35.269026   71966 out.go:177] * Done! kubectl is now configured to use "embed-certs-512173" cluster and "default" namespace by default
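	Note: the next start attempt (process 72712) fails repeatedly on the kubelet health check against http://localhost:10248/healthz. The same probe and the troubleshooting commands quoted in the kubeadm output below can be run from inside the node; a sketch, assuming the profile name "old-k8s-version-210442" shown in the CRI-O log at the end of this section:

	    minikube -p old-k8s-version-210442 ssh
	    # inside the node:
	    curl -sSL http://localhost:10248/healthz
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -xeu kubelet | tail -n 100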
	I0425 20:08:34.404410   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:34.404662   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:08:44.405293   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:08:44.405518   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:04.406406   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:04.406676   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.407969   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:09:44.408240   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:09:44.408259   72712 kubeadm.go:309] 
	I0425 20:09:44.408293   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:09:44.408355   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:09:44.408373   72712 kubeadm.go:309] 
	I0425 20:09:44.408417   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:09:44.408448   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:09:44.408562   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:09:44.408575   72712 kubeadm.go:309] 
	I0425 20:09:44.408655   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:09:44.408684   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:09:44.408711   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:09:44.408718   72712 kubeadm.go:309] 
	I0425 20:09:44.408812   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:09:44.408912   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:09:44.408939   72712 kubeadm.go:309] 
	I0425 20:09:44.409085   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:09:44.409217   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:09:44.409341   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:09:44.409418   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:09:44.409433   72712 kubeadm.go:309] 
	I0425 20:09:44.410319   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:09:44.410423   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:09:44.410510   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0425 20:09:44.410640   72712 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
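	Note: the WARNING in the stderr above says the kubelet service is not enabled. A minimal, hedged follow-up for that specific warning (profile name assumed from the CRI-O log later in this section) would be to enable and start the unit before retrying:

	    minikube -p old-k8s-version-210442 ssh "sudo systemctl enable --now kubelet"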
	
	I0425 20:09:44.410700   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0425 20:09:45.395830   72712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 20:09:45.412628   72712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0425 20:09:45.423387   72712 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0425 20:09:45.423412   72712 kubeadm.go:156] found existing configuration files:
	
	I0425 20:09:45.423465   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0425 20:09:45.434317   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0425 20:09:45.434389   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0425 20:09:45.445657   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0425 20:09:45.455698   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0425 20:09:45.455772   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0425 20:09:45.466137   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.476140   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0425 20:09:45.476192   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0425 20:09:45.486410   72712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0425 20:09:45.495465   72712 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0425 20:09:45.495522   72712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
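	Note: the cleanup above (kubeadm.go:154/162 in the log) greps each /etc/kubernetes/*.conf for the control-plane endpoint and removes any file where the grep fails; here the greps exit with status 2 because the files are already gone. A minimal sketch of the equivalent check, run inside the node:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"   # missing or stale config is removed, as in the log above
	    done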
	I0425 20:09:45.505410   72712 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0425 20:09:45.726416   72712 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0425 20:11:42.214574   72712 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0425 20:11:42.214715   72712 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0425 20:11:42.216323   72712 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0425 20:11:42.216393   72712 kubeadm.go:309] [preflight] Running pre-flight checks
	I0425 20:11:42.216507   72712 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0425 20:11:42.216650   72712 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0425 20:11:42.216795   72712 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0425 20:11:42.216882   72712 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0425 20:11:42.218766   72712 out.go:204]   - Generating certificates and keys ...
	I0425 20:11:42.218847   72712 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0425 20:11:42.218923   72712 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0425 20:11:42.219042   72712 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0425 20:11:42.219103   72712 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0425 20:11:42.219167   72712 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0425 20:11:42.219237   72712 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0425 20:11:42.219321   72712 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0425 20:11:42.219407   72712 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0425 20:11:42.219519   72712 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0425 20:11:42.219639   72712 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0425 20:11:42.219694   72712 kubeadm.go:309] [certs] Using the existing "sa" key
	I0425 20:11:42.219742   72712 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0425 20:11:42.219786   72712 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0425 20:11:42.219831   72712 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0425 20:11:42.219883   72712 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0425 20:11:42.219929   72712 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0425 20:11:42.220029   72712 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0425 20:11:42.220139   72712 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0425 20:11:42.220204   72712 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0425 20:11:42.220308   72712 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0425 20:11:42.222891   72712 out.go:204]   - Booting up control plane ...
	I0425 20:11:42.222979   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0425 20:11:42.223054   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0425 20:11:42.223129   72712 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0425 20:11:42.223222   72712 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0425 20:11:42.223404   72712 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0425 20:11:42.223459   72712 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0425 20:11:42.223565   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.223835   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.223937   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224165   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224243   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224457   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224541   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.224799   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.224902   72712 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0425 20:11:42.225125   72712 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0425 20:11:42.225134   72712 kubeadm.go:309] 
	I0425 20:11:42.225166   72712 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0425 20:11:42.225204   72712 kubeadm.go:309] 		timed out waiting for the condition
	I0425 20:11:42.225210   72712 kubeadm.go:309] 
	I0425 20:11:42.225239   72712 kubeadm.go:309] 	This error is likely caused by:
	I0425 20:11:42.225267   72712 kubeadm.go:309] 		- The kubelet is not running
	I0425 20:11:42.225352   72712 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0425 20:11:42.225358   72712 kubeadm.go:309] 
	I0425 20:11:42.225446   72712 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0425 20:11:42.225476   72712 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0425 20:11:42.225522   72712 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0425 20:11:42.225533   72712 kubeadm.go:309] 
	I0425 20:11:42.225626   72712 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0425 20:11:42.225714   72712 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0425 20:11:42.225729   72712 kubeadm.go:309] 
	I0425 20:11:42.225875   72712 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0425 20:11:42.225951   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0425 20:11:42.226022   72712 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0425 20:11:42.226096   72712 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0425 20:11:42.226129   72712 kubeadm.go:309] 
	I0425 20:11:42.226162   72712 kubeadm.go:393] duration metric: took 8m0.122692927s to StartCluster
	I0425 20:11:42.226242   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0425 20:11:42.226299   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0425 20:11:42.283295   72712 cri.go:89] found id: ""
	I0425 20:11:42.283320   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.283329   72712 logs.go:278] No container was found matching "kube-apiserver"
	I0425 20:11:42.283335   72712 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0425 20:11:42.283389   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0425 20:11:42.322462   72712 cri.go:89] found id: ""
	I0425 20:11:42.322493   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.322505   72712 logs.go:278] No container was found matching "etcd"
	I0425 20:11:42.322512   72712 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0425 20:11:42.322574   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0425 20:11:42.372329   72712 cri.go:89] found id: ""
	I0425 20:11:42.372355   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.372363   72712 logs.go:278] No container was found matching "coredns"
	I0425 20:11:42.372369   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0425 20:11:42.372416   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0425 20:11:42.420348   72712 cri.go:89] found id: ""
	I0425 20:11:42.420374   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.420382   72712 logs.go:278] No container was found matching "kube-scheduler"
	I0425 20:11:42.420389   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0425 20:11:42.420447   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0425 20:11:42.460274   72712 cri.go:89] found id: ""
	I0425 20:11:42.460317   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.460329   72712 logs.go:278] No container was found matching "kube-proxy"
	I0425 20:11:42.460337   72712 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0425 20:11:42.460395   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0425 20:11:42.503828   72712 cri.go:89] found id: ""
	I0425 20:11:42.503855   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.503867   72712 logs.go:278] No container was found matching "kube-controller-manager"
	I0425 20:11:42.503874   72712 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0425 20:11:42.503933   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0425 20:11:42.545045   72712 cri.go:89] found id: ""
	I0425 20:11:42.545070   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.545086   72712 logs.go:278] No container was found matching "kindnet"
	I0425 20:11:42.545095   72712 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0425 20:11:42.545156   72712 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0425 20:11:42.586389   72712 cri.go:89] found id: ""
	I0425 20:11:42.586413   72712 logs.go:276] 0 containers: []
	W0425 20:11:42.586421   72712 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0425 20:11:42.586429   72712 logs.go:123] Gathering logs for dmesg ...
	I0425 20:11:42.586440   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0425 20:11:42.602835   72712 logs.go:123] Gathering logs for describe nodes ...
	I0425 20:11:42.602863   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0425 20:11:42.695131   72712 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0425 20:11:42.695153   72712 logs.go:123] Gathering logs for CRI-O ...
	I0425 20:11:42.695168   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0425 20:11:42.819889   72712 logs.go:123] Gathering logs for container status ...
	I0425 20:11:42.819922   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0425 20:11:42.869446   72712 logs.go:123] Gathering logs for kubelet ...
	I0425 20:11:42.869474   72712 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0425 20:11:42.927184   72712 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0425 20:11:42.927236   72712 out.go:239] * 
	W0425 20:11:42.927291   72712 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.927311   72712 out.go:239] * 
	W0425 20:11:42.928275   72712 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0425 20:11:42.931353   72712 out.go:177] 
	W0425 20:11:42.932654   72712 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0425 20:11:42.932696   72712 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0425 20:11:42.932713   72712 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0425 20:11:42.934227   72712 out.go:177] 
	
	
	==> CRI-O <==
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.697118580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076539697081051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=888518b5-05e5-44a2-a4b3-3ba1ada8a7ba name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.697867129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8faf421f-a6d8-4ca5-a629-870d5f534d94 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.697969239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8faf421f-a6d8-4ca5-a629-870d5f534d94 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.698024532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8faf421f-a6d8-4ca5-a629-870d5f534d94 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.734710566Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcc02dc8-5391-474c-9ef6-07dad1aba5b8 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.734812899Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcc02dc8-5391-474c-9ef6-07dad1aba5b8 name=/runtime.v1.RuntimeService/Version
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.736291849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68a02287-b043-4417-b975-0fcfedca01ed name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.736813108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076539736791533,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68a02287-b043-4417-b975-0fcfedca01ed name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.737761880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e339fcd1-01a6-462e-b8d4-01938c988144 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.737837832Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e339fcd1-01a6-462e-b8d4-01938c988144 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.737871571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e339fcd1-01a6-462e-b8d4-01938c988144 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.776819873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ff94c74-7af4-4517-baf6-2ab2dc40982c name=/runtime.v1.RuntimeService/Version
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.776932267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ff94c74-7af4-4517-baf6-2ab2dc40982c name=/runtime.v1.RuntimeService/Version
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.778201239Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ebacfad-be60-49b1-9790-3a8ce6913373 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.778760497Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076539778718705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ebacfad-be60-49b1-9790-3a8ce6913373 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.779366366Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f133e44c-085c-4770-80c3-c9ff7815e2a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.779415282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f133e44c-085c-4770-80c3-c9ff7815e2a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.779445995Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f133e44c-085c-4770-80c3-c9ff7815e2a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.815795533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8a5abde-5520-45b0-9263-f4b73102904e name=/runtime.v1.RuntimeService/Version
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.815870887Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8a5abde-5520-45b0-9263-f4b73102904e name=/runtime.v1.RuntimeService/Version
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.817021347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfce8471-3806-444e-8710-3f9983ec2f6c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.817417742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714076539817400645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfce8471-3806-444e-8710-3f9983ec2f6c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.818203967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11c5d277-e3c5-4737-a4c2-aaab1819a8b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.818252740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11c5d277-e3c5-4737-a4c2-aaab1819a8b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 25 20:22:19 old-k8s-version-210442 crio[650]: time="2024-04-25 20:22:19.818288489Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=11c5d277-e3c5-4737-a4c2-aaab1819a8b3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr25 20:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063840] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050603] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.017598] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.598719] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.716084] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.653602] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.065627] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084851] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.203835] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.167647] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.363402] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.835292] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +0.069736] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.981211] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[ +11.947575] kauditd_printk_skb: 46 callbacks suppressed
	[Apr25 20:07] systemd-fstab-generator[4988]: Ignoring "noauto" option for root device
	[Apr25 20:09] systemd-fstab-generator[5273]: Ignoring "noauto" option for root device
	[  +0.069773] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:22:20 up 19 min,  0 users,  load average: 0.20, 0.08, 0.07
	Linux old-k8s-version-210442 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]: net/http.(*Transport).dial(0xc000292140, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bdb6e0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]: net/http.(*Transport).dialConn(0xc000292140, 0x4f7fe00, 0xc000052030, 0x0, 0xc00038a540, 0x5, 0xc000bdb6e0, 0x24, 0x0, 0xc000c3e120, ...)
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]: net/http.(*Transport).dialConnFor(0xc000292140, 0xc000b973f0)
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]: created by net/http.(*Transport).queueForDial
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]: goroutine 156 [select]:
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc00097b270, 0x1, 0x0, 0x0, 0x0, 0x0)
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000c740c0, 0x0, 0x0)
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000c72000)
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 25 20:22:16 old-k8s-version-210442 kubelet[6687]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Apr 25 20:22:17 old-k8s-version-210442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 129.
	Apr 25 20:22:17 old-k8s-version-210442 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 25 20:22:17 old-k8s-version-210442 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 25 20:22:17 old-k8s-version-210442 kubelet[6698]: I0425 20:22:17.587608    6698 server.go:416] Version: v1.20.0
	Apr 25 20:22:17 old-k8s-version-210442 kubelet[6698]: I0425 20:22:17.587952    6698 server.go:837] Client rotation is on, will bootstrap in background
	Apr 25 20:22:17 old-k8s-version-210442 kubelet[6698]: I0425 20:22:17.590193    6698 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 25 20:22:17 old-k8s-version-210442 kubelet[6698]: W0425 20:22:17.591229    6698 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 25 20:22:17 old-k8s-version-210442 kubelet[6698]: I0425 20:22:17.591343    6698 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210442 -n old-k8s-version-210442
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 2 (249.533427ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-210442" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (91.29s)
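For reference, the kubeadm output captured above already names the commands expected to be run on the node when the kubelet never becomes healthy; collected here as a plain shell sketch (node name, health endpoint, CRI-O socket path, and the cgroup-driver suggestion are taken verbatim from the log, nothing else is assumed):

	# Check whether the kubelet is running and why it keeps restarting
	systemctl status kubelet
	journalctl -xeu kubelet
	# The kubelet health endpoint kubeadm was polling
	curl -sSL http://localhost:10248/healthz
	# List Kubernetes containers via the CRI-O socket and inspect a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# Suggestion printed by minikube itself for this failure mode
	out/minikube-linux-amd64 start -p old-k8s-version-210442 --extra-config=kubelet.cgroup-driver=systemd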

                                                
                                    

Test pass (244/311)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 36.93
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0/json-events 16.67
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.14
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.58
22 TestOffline 127.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 216.61
29 TestAddons/parallel/Registry 21
31 TestAddons/parallel/InspektorGadget 11.36
33 TestAddons/parallel/HelmTiller 17.03
35 TestAddons/parallel/CSI 45.96
36 TestAddons/parallel/Headlamp 15.23
37 TestAddons/parallel/CloudSpanner 7.02
38 TestAddons/parallel/LocalPath 66.99
39 TestAddons/parallel/NvidiaDevicePlugin 6.85
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.11
45 TestCertOptions 47.27
46 TestCertExpiration 307.21
48 TestForceSystemdFlag 70.74
49 TestForceSystemdEnv 47.15
51 TestKVMDriverInstallOrUpdate 5.05
55 TestErrorSpam/setup 49.33
56 TestErrorSpam/start 0.37
57 TestErrorSpam/status 0.78
58 TestErrorSpam/pause 1.65
59 TestErrorSpam/unpause 1.81
60 TestErrorSpam/stop 4.92
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 86.86
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 40.32
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.29
72 TestFunctional/serial/CacheCmd/cache/add_local 2.31
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 33.66
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.63
83 TestFunctional/serial/LogsFileCmd 1.66
84 TestFunctional/serial/InvalidService 4.81
86 TestFunctional/parallel/ConfigCmd 0.37
87 TestFunctional/parallel/DashboardCmd 16.74
88 TestFunctional/parallel/DryRun 0.3
89 TestFunctional/parallel/InternationalLanguage 0.17
90 TestFunctional/parallel/StatusCmd 0.84
94 TestFunctional/parallel/ServiceCmdConnect 28.74
95 TestFunctional/parallel/AddonsCmd 0.15
96 TestFunctional/parallel/PersistentVolumeClaim 56.38
98 TestFunctional/parallel/SSHCmd 0.51
99 TestFunctional/parallel/CpCmd 1.44
100 TestFunctional/parallel/MySQL 29.42
101 TestFunctional/parallel/FileSync 0.22
102 TestFunctional/parallel/CertSync 1.37
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
110 TestFunctional/parallel/License 0.51
111 TestFunctional/parallel/Version/short 0.06
112 TestFunctional/parallel/Version/components 0.78
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
117 TestFunctional/parallel/ImageCommands/ImageBuild 5.16
118 TestFunctional/parallel/ImageCommands/Setup 2.1
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.37
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.7
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.71
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.85
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.76
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.35
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 4.97
138 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
140 TestFunctional/parallel/ProfileCmd/profile_list 0.48
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
142 TestFunctional/parallel/MountCmd/any-port 8.63
143 TestFunctional/parallel/ServiceCmd/List 1.23
144 TestFunctional/parallel/ServiceCmd/JSONOutput 1.26
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
146 TestFunctional/parallel/ServiceCmd/Format 0.41
147 TestFunctional/parallel/MountCmd/specific-port 1.96
148 TestFunctional/parallel/ServiceCmd/URL 0.35
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 268.85
157 TestMultiControlPlane/serial/DeployApp 8.45
158 TestMultiControlPlane/serial/PingHostFromPods 1.47
159 TestMultiControlPlane/serial/AddWorkerNode 48.28
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.57
162 TestMultiControlPlane/serial/CopyFile 13.8
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.42
168 TestMultiControlPlane/serial/DeleteSecondaryNode 17.52
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
171 TestMultiControlPlane/serial/RestartCluster 379.45
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.41
173 TestMultiControlPlane/serial/AddSecondaryNode 76.41
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
178 TestJSONOutput/start/Command 98.99
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.75
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.72
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.38
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.05
207 TestMinikubeProfile 93.91
210 TestMountStart/serial/StartWithMountFirst 29.1
211 TestMountStart/serial/VerifyMountFirst 0.38
212 TestMountStart/serial/StartWithMountSecond 29.57
213 TestMountStart/serial/VerifyMountSecond 0.38
214 TestMountStart/serial/DeleteFirst 0.71
215 TestMountStart/serial/VerifyMountPostDelete 0.66
216 TestMountStart/serial/Stop 1.51
217 TestMountStart/serial/RestartStopped 22.93
218 TestMountStart/serial/VerifyMountPostStop 0.38
221 TestMultiNode/serial/FreshStart2Nodes 105.74
222 TestMultiNode/serial/DeployApp2Nodes 5.6
223 TestMultiNode/serial/PingHostFrom2Pods 0.85
224 TestMultiNode/serial/AddNode 43.01
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.24
227 TestMultiNode/serial/CopyFile 7.64
228 TestMultiNode/serial/StopNode 3.18
229 TestMultiNode/serial/StartAfterStop 32.09
231 TestMultiNode/serial/DeleteNode 2.32
233 TestMultiNode/serial/RestartMultiNode 172.01
234 TestMultiNode/serial/ValidateNameConflict 45.52
241 TestScheduledStopUnix 119.12
245 TestRunningBinaryUpgrade 162.51
251 TestPause/serial/Start 109.71
258 TestNetworkPlugins/group/false 3.73
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 112.16
266 TestNoKubernetes/serial/StartWithStopK8s 29.44
267 TestNoKubernetes/serial/Start 50.77
268 TestStoppedBinaryUpgrade/Setup 2.62
269 TestStoppedBinaryUpgrade/Upgrade 161.91
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
271 TestNoKubernetes/serial/ProfileList 0.86
272 TestNoKubernetes/serial/Stop 1.47
273 TestNoKubernetes/serial/StartNoArgs 44.04
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
283 TestNetworkPlugins/group/auto/Start 88.07
284 TestNetworkPlugins/group/kindnet/Start 64.88
285 TestNetworkPlugins/group/calico/Start 112.79
286 TestNetworkPlugins/group/auto/KubeletFlags 0.21
287 TestNetworkPlugins/group/auto/NetCatPod 10.26
288 TestNetworkPlugins/group/auto/DNS 0.23
289 TestNetworkPlugins/group/auto/Localhost 0.18
290 TestNetworkPlugins/group/auto/HairPin 0.17
291 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
292 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
293 TestNetworkPlugins/group/kindnet/NetCatPod 12.26
294 TestNetworkPlugins/group/custom-flannel/Start 85.67
295 TestNetworkPlugins/group/kindnet/DNS 0.19
296 TestNetworkPlugins/group/kindnet/Localhost 0.13
297 TestNetworkPlugins/group/kindnet/HairPin 0.15
298 TestNetworkPlugins/group/enable-default-cni/Start 128.97
299 TestNetworkPlugins/group/calico/ControllerPod 6.01
300 TestNetworkPlugins/group/calico/KubeletFlags 0.26
301 TestNetworkPlugins/group/calico/NetCatPod 13.28
302 TestNetworkPlugins/group/calico/DNS 0.19
303 TestNetworkPlugins/group/calico/Localhost 0.14
304 TestNetworkPlugins/group/calico/HairPin 0.15
305 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
306 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.32
307 TestNetworkPlugins/group/flannel/Start 90.9
308 TestNetworkPlugins/group/custom-flannel/DNS 0.22
309 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
310 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
311 TestNetworkPlugins/group/bridge/Start 85.29
314 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
315 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.33
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
319 TestNetworkPlugins/group/flannel/ControllerPod 6.01
321 TestStartStop/group/no-preload/serial/FirstStart 118.65
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
323 TestNetworkPlugins/group/bridge/NetCatPod 13.31
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
325 TestNetworkPlugins/group/flannel/NetCatPod 14.5
326 TestNetworkPlugins/group/bridge/DNS 0.2
327 TestNetworkPlugins/group/bridge/Localhost 0.15
328 TestNetworkPlugins/group/bridge/HairPin 0.14
329 TestNetworkPlugins/group/flannel/DNS 0.17
330 TestNetworkPlugins/group/flannel/Localhost 0.15
331 TestNetworkPlugins/group/flannel/HairPin 0.13
333 TestStartStop/group/embed-certs/serial/FirstStart 65.24
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 92.3
336 TestStartStop/group/embed-certs/serial/DeployApp 10.32
337 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
339 TestStartStop/group/no-preload/serial/DeployApp 11.33
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
348 TestStartStop/group/embed-certs/serial/SecondStart 649.76
351 TestStartStop/group/no-preload/serial/SecondStart 624.01
352 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 577.8
353 TestStartStop/group/old-k8s-version/serial/Stop 2.31
354 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
365 TestStartStop/group/newest-cni/serial/FirstStart 60.11
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
368 TestStartStop/group/newest-cni/serial/Stop 10.67
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
370 TestStartStop/group/newest-cni/serial/SecondStart 45.44
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
374 TestStartStop/group/newest-cni/serial/Pause 2.49
x
+
TestDownloadOnly/v1.20.0/json-events (36.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-587952 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-587952 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (36.934491046s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (36.93s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-587952
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-587952: exit status 85 (68.857221ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-587952 | jenkins | v1.33.0 | 25 Apr 24 18:31 UTC |          |
	|         | -p download-only-587952        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 18:31:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 18:31:13.686019   13694 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:31:13.686471   13694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:31:13.686486   13694 out.go:304] Setting ErrFile to fd 2...
	I0425 18:31:13.686492   13694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:31:13.686939   13694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	W0425 18:31:13.687156   13694 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18757-6355/.minikube/config/config.json: open /home/jenkins/minikube-integration/18757-6355/.minikube/config/config.json: no such file or directory
	I0425 18:31:13.687945   13694 out.go:298] Setting JSON to true
	I0425 18:31:13.688804   13694 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":820,"bootTime":1714069054,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 18:31:13.688870   13694 start.go:139] virtualization: kvm guest
	I0425 18:31:13.691548   13694 out.go:97] [download-only-587952] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 18:31:13.693151   13694 out.go:169] MINIKUBE_LOCATION=18757
	W0425 18:31:13.691649   13694 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball: no such file or directory
	I0425 18:31:13.691702   13694 notify.go:220] Checking for updates...
	I0425 18:31:13.695935   13694 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 18:31:13.697500   13694 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:31:13.698995   13694 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:31:13.700375   13694 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0425 18:31:13.703085   13694 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0425 18:31:13.703307   13694 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 18:31:14.140334   13694 out.go:97] Using the kvm2 driver based on user configuration
	I0425 18:31:14.140370   13694 start.go:297] selected driver: kvm2
	I0425 18:31:14.140376   13694 start.go:901] validating driver "kvm2" against <nil>
	I0425 18:31:14.140714   13694 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:31:14.140881   13694 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 18:31:14.155687   13694 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 18:31:14.155745   13694 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 18:31:14.156261   13694 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0425 18:31:14.156412   13694 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0425 18:31:14.156438   13694 cni.go:84] Creating CNI manager for ""
	I0425 18:31:14.156445   13694 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 18:31:14.156456   13694 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 18:31:14.156508   13694 start.go:340] cluster config:
	{Name:download-only-587952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-587952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:31:14.156671   13694 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:31:14.158653   13694 out.go:97] Downloading VM boot image ...
	I0425 18:31:14.158687   13694 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18757-6355/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0425 18:31:23.794086   13694 out.go:97] Starting "download-only-587952" primary control-plane node in "download-only-587952" cluster
	I0425 18:31:23.794120   13694 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 18:31:23.906234   13694 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0425 18:31:23.906272   13694 cache.go:56] Caching tarball of preloaded images
	I0425 18:31:23.906444   13694 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 18:31:23.908221   13694 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0425 18:31:23.908241   13694 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0425 18:31:24.016293   13694 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0425 18:31:47.211197   13694 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0425 18:31:47.211299   13694 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0425 18:31:48.117918   13694 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0425 18:31:48.118341   13694 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/download-only-587952/config.json ...
	I0425 18:31:48.118374   13694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/download-only-587952/config.json: {Name:mkc1c91d403f7759bcd4d10a94ec74ae43ceac72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:31:48.118544   13694 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0425 18:31:48.118759   13694 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-587952 host does not exist
	  To start a cluster, run: "minikube start -p download-only-587952"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
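The download log above embeds the expected md5 of the cached preload tarball in the URL's checksum query; a minimal way to re-verify the cached file by hand, using only the path and checksum shown in the log:

	md5sum /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	# expected: f93b07cde9c3289306cbaeb7a1803c19  (from ?checksum=md5:... in the download URL above)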

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-587952
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/json-events (16.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-019320 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-019320 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.666551763s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (16.67s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-019320
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-019320: exit status 85 (68.646348ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-587952 | jenkins | v1.33.0 | 25 Apr 24 18:31 UTC |                     |
	|         | -p download-only-587952        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 25 Apr 24 18:31 UTC | 25 Apr 24 18:31 UTC |
	| delete  | -p download-only-587952        | download-only-587952 | jenkins | v1.33.0 | 25 Apr 24 18:31 UTC | 25 Apr 24 18:31 UTC |
	| start   | -o=json --download-only        | download-only-019320 | jenkins | v1.33.0 | 25 Apr 24 18:31 UTC |                     |
	|         | -p download-only-019320        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/25 18:31:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0425 18:31:50.966334   13981 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:31:50.966571   13981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:31:50.966581   13981 out.go:304] Setting ErrFile to fd 2...
	I0425 18:31:50.966587   13981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:31:50.966791   13981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:31:50.967338   13981 out.go:298] Setting JSON to true
	I0425 18:31:50.968143   13981 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":857,"bootTime":1714069054,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 18:31:50.968200   13981 start.go:139] virtualization: kvm guest
	I0425 18:31:50.970614   13981 out.go:97] [download-only-019320] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 18:31:50.972145   13981 out.go:169] MINIKUBE_LOCATION=18757
	I0425 18:31:50.970750   13981 notify.go:220] Checking for updates...
	I0425 18:31:50.974901   13981 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 18:31:50.976365   13981 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:31:50.977789   13981 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:31:50.979079   13981 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0425 18:31:50.981259   13981 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0425 18:31:50.981456   13981 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 18:31:51.015213   13981 out.go:97] Using the kvm2 driver based on user configuration
	I0425 18:31:51.015250   13981 start.go:297] selected driver: kvm2
	I0425 18:31:51.015256   13981 start.go:901] validating driver "kvm2" against <nil>
	I0425 18:31:51.015626   13981 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:31:51.015719   13981 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18757-6355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0425 18:31:51.030673   13981 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0425 18:31:51.030719   13981 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0425 18:31:51.031153   13981 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0425 18:31:51.031289   13981 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0425 18:31:51.031346   13981 cni.go:84] Creating CNI manager for ""
	I0425 18:31:51.031358   13981 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0425 18:31:51.031368   13981 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0425 18:31:51.031419   13981 start.go:340] cluster config:
	{Name:download-only-019320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-019320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:31:51.031548   13981 iso.go:125] acquiring lock: {Name:mk4deb53653b7b4f452836666338f58451eabad3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0425 18:31:51.033243   13981 out.go:97] Starting "download-only-019320" primary control-plane node in "download-only-019320" cluster
	I0425 18:31:51.033263   13981 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:31:51.541473   13981 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 18:31:51.541507   13981 cache.go:56] Caching tarball of preloaded images
	I0425 18:31:51.541689   13981 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:31:51.543542   13981 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0425 18:31:51.543579   13981 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0425 18:31:51.653173   13981 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5927bd9d05f26d08fc05540d1d92e5d8 -> /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0425 18:32:02.582841   13981 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0425 18:32:02.582925   13981 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18757-6355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0425 18:32:03.328732   13981 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0425 18:32:03.329052   13981 profile.go:143] Saving config to /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/download-only-019320/config.json ...
	I0425 18:32:03.329080   13981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/download-only-019320/config.json: {Name:mk4ff99bb22dee060c6a5fc0065199677f050975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0425 18:32:03.329226   13981 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0425 18:32:03.329363   13981 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18757-6355/.minikube/cache/linux/amd64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-019320 host does not exist
	  To start a cluster, run: "minikube start -p download-only-019320"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-019320
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-815806 --alsologtostderr --binary-mirror http://127.0.0.1:42043 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-815806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-815806
--- PASS: TestBinaryMirror (0.58s)
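
For reference, the binary-mirror run above only needs an HTTP endpoint that serves the Kubernetes release binaries; the sketch below reuses the flags from the log with a placeholder profile name and mirror URL (how the mirror itself is served is left out, and the layout it must expose is not shown in this report).

    # Sketch: download-only start against a local binary mirror (profile and URL are placeholders).
    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:8080 \
      --driver=kvm2 --container-runtime=crio
    minikube delete -p binary-mirror-demo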

                                                
                                    
x
+
TestOffline (127.27s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-744375 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-744375 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m6.271430427s)
helpers_test.go:175: Cleaning up "offline-crio-744375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-744375
--- PASS: TestOffline (127.27s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-477322
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-477322: exit status 85 (65.045712ms)

                                                
                                                
-- stdout --
	* Profile "addons-477322" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-477322"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-477322
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-477322: exit status 85 (68.56808ms)

                                                
                                                
-- stdout --
	* Profile "addons-477322" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-477322"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (216.61s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-477322 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-477322 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m36.605325825s)
--- PASS: TestAddons/Setup (216.61s)
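
The single-line start invocation above is easier to read when reflowed; this is the same command from the log, with the flags unchanged and only regrouped:

    minikube start -p addons-477322 --wait=true --memory=4000 --alsologtostderr \
      --driver=kvm2 --container-runtime=crio \
      --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd \
      --addons=ingress --addons=ingress-dns --addons=helm-tiller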

                                                
                                    
x
+
TestAddons/parallel/Registry (21s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 20.991451ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-wf47l" [0d3a67d8-466b-42fa-8b7b-e306fee91c84] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005982327s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vcjwf" [daff0d5c-8ea3-43fd-948e-5ac439d1a5a4] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004637686s
addons_test.go:340: (dbg) Run:  kubectl --context addons-477322 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-477322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-477322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.948651235s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 ip
2024/04/25 18:36:04 [DEBUG] GET http://192.168.39.239:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-477322 addons disable registry --alsologtostderr -v=1: (1.85281055s)
--- PASS: TestAddons/parallel/Registry (21.00s)
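
The registry check above reduces to an in-cluster HTTP probe of the registry service plus a host-side probe of the proxied port; a sketch of both, assuming the addons-477322 profile is still running:

    # In-cluster: the registry service should answer on its cluster-local name.
    kubectl --context addons-477322 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # Host-side: the registry proxy is reachable on port 5000 of the node IP,
    # matching the "GET http://192.168.39.239:5000" debug line above.
    curl -sI "http://$(minikube -p addons-477322 ip):5000"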

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.36s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5qxql" [e15df6d7-ef35-4eac-b0df-28a75292a12a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.066139395s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-477322
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-477322: (6.295928862s)
--- PASS: TestAddons/parallel/InspektorGadget (11.36s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (17.03s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 18.819433ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-dkd7m" [aa079112-30fb-4401-9271-cf4059a1c2ce] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.008065841s
addons_test.go:473: (dbg) Run:  kubectl --context addons-477322 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-477322 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.850412275s)
addons_test.go:478: kubectl --context addons-477322 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:473: (dbg) Run:  kubectl --context addons-477322 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-477322 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.971458372s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (17.03s)

                                                
                                    
x
+
TestAddons/parallel/CSI (45.96s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 5.391044ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-477322 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-477322 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b320bc79-5daa-4402-8e87-46e1ec67d8d5] Pending
helpers_test.go:344: "task-pv-pod" [b320bc79-5daa-4402-8e87-46e1ec67d8d5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b320bc79-5daa-4402-8e87-46e1ec67d8d5] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 22.004801684s
addons_test.go:584: (dbg) Run:  kubectl --context addons-477322 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-477322 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-477322 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-477322 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-477322 delete pod task-pv-pod: (1.220717273s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-477322 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-477322 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-477322 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4bd1cf66-5770-4f7f-96e4-0485e4ec741c] Pending
helpers_test.go:344: "task-pv-pod-restore" [4bd1cf66-5770-4f7f-96e4-0485e4ec741c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4bd1cf66-5770-4f7f-96e4-0485e4ec741c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004061573s
addons_test.go:626: (dbg) Run:  kubectl --context addons-477322 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-477322 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-477322 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-477322 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.935461257s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (45.96s)
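
The CSI flow above (PVC, pod, snapshot, restore) can be reproduced with manifests along these lines; the storage class and snapshot class names are the ones the csi-hostpath-driver addon is normally expected to install, so confirm them first with kubectl get storageclass,volumesnapshotclass. This is an illustrative sketch, not the testdata files the test applies.

    kubectl --context addons-477322 apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      storageClassName: csi-hostpath-sc   # assumed addon default; verify on your cluster
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed addon default
      source:
        persistentVolumeClaimName: hpvc
    EOF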

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-477322 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-477322 --alsologtostderr -v=1: (1.223426313s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-4hdvs" [b0b1c3bf-f2b2-4b6a-ba59-104181e36d01] Pending
helpers_test.go:344: "headlamp-7559bf459f-4hdvs" [b0b1c3bf-f2b2-4b6a-ba59-104181e36d01] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-4hdvs" [b0b1c3bf-f2b2-4b6a-ba59-104181e36d01] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004301603s
--- PASS: TestAddons/parallel/Headlamp (15.23s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (7.02s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-9vwsx" [7bda8fee-d167-4140-a628-00fe2e7f7392] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.011892689s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-477322
addons_test.go:860: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-477322: (1.005818125s)
--- PASS: TestAddons/parallel/CloudSpanner (7.02s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (66.99s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-477322 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-477322 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d085958d-d83f-4a53-9706-c3e2a355e65a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d085958d-d83f-4a53-9706-c3e2a355e65a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d085958d-d83f-4a53-9706-c3e2a355e65a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 12.003536635s
addons_test.go:891: (dbg) Run:  kubectl --context addons-477322 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 ssh "cat /opt/local-path-provisioner/pvc-c6aa81f4-fb5f-4681-a571-2703b02db912_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-477322 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-477322 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-477322 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-477322 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.034311931s)
--- PASS: TestAddons/parallel/LocalPath (66.99s)
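
A minimal manifest that exercises the storage-provisioner-rancher addon the same way as the test above: a PVC on the local-path storage class (the class the rancher local-path provisioner normally installs) and a pod that writes a file into the claim. Names and the file contents are illustrative, not the testdata files used by the test.

    kubectl --context addons-477322 apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: local-path   # assumed provisioner default
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 64Mi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-local-path
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "echo hello > /data/file1"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-pvc
    EOF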

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.85s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4tmhd" [e5294b6c-a965-4df2-8c07-1696d3c1ea57] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005387576s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-477322
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.85s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-z4ljv" [3df1cc7b-c249-4597-b8c9-3a9b4bc48222] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004827603s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-477322 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-477322 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestCertOptions (47.27s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-548779 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-548779 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (45.786416376s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-548779 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-548779 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-548779 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-548779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-548779
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-548779: (1.015768208s)
--- PASS: TestCertOptions (47.27s)
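
The certificate checks above can be run by hand against the same profile; the grep calls are only a convenience added here to surface the SAN entries and the custom API server endpoint.

    minikube -p cert-options-548779 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    kubectl --context cert-options-548779 config view | grep server   # expect port 8555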

                                                
                                    
x
+
TestCertExpiration (307.21s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-571974 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0425 19:43:19.378336   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 19:43:36.328785   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-571974 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m42.266695262s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-571974 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-571974 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (24.106365689s)
helpers_test.go:175: Cleaning up "cert-expiration-571974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-571974
--- PASS: TestCertExpiration (307.21s)
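
The expiration scenario above is just two starts of the same profile with different --cert-expiration values; a sketch with a placeholder profile name:

    # Create a cluster whose certificates expire quickly, wait past expiry, then
    # restart with a long expiration so the certificates can be renewed.
    minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=3m \
      --driver=kvm2 --container-runtime=crio
    sleep 180
    minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h \
      --driver=kvm2 --container-runtime=crio
    minikube delete -p cert-expiration-demo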

                                                
                                    
x
+
TestForceSystemdFlag (70.74s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-543895 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-543895 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.477010685s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-543895 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-543895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-543895
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-543895: (1.049229985s)
--- PASS: TestForceSystemdFlag (70.74s)
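
The systemd check above reads CRI-O's drop-in config inside the node; the grep below is an added convenience, and cgroup_manager is the key the test is expected to assert on (a sketch with a placeholder profile name):

    minikube start -p force-systemd-demo --memory=2048 --force-systemd \
      --driver=kvm2 --container-runtime=crio
    minikube -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" \
      | grep cgroup_manager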

                                                
                                    
x
+
TestForceSystemdEnv (47.15s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-783271 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-783271 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.178782533s)
helpers_test.go:175: Cleaning up "force-systemd-env-783271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-783271
--- PASS: TestForceSystemdEnv (47.15s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (5.05s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.05s)

                                                
                                    
x
+
TestErrorSpam/setup (49.33s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-282789 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-282789 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-282789 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-282789 --driver=kvm2  --container-runtime=crio: (49.332134107s)
--- PASS: TestErrorSpam/setup (49.33s)

                                                
                                    
x
+
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
x
+
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
x
+
TestErrorSpam/pause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 pause
--- PASS: TestErrorSpam/pause (1.65s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
x
+
TestErrorSpam/stop (4.92s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 stop: (2.310811669s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 stop: (1.307656442s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-282789 --log_dir /tmp/nospam-282789 stop: (1.300825634s)
--- PASS: TestErrorSpam/stop (4.92s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18757-6355/.minikube/files/etc/test/nested/copy/13682/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (86.86s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-117423 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0425 18:45:45.439307   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:45:45.445022   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:45:45.455309   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:45:45.475572   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:45:45.515857   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:45:45.596348   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:45:45.756800   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:45:46.077427   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:45:46.718321   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:45:47.999434   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:45:50.560705   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:45:55.681472   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:46:05.921751   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:46:26.401939   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-117423 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m26.864343191s)
--- PASS: TestFunctional/serial/StartWithProxy (86.86s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (40.32s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-117423 --alsologtostderr -v=8
E0425 18:47:07.362547   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-117423 --alsologtostderr -v=8: (40.317155794s)
functional_test.go:659: soft start took 40.317837224s for "functional-117423" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.32s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-117423 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 cache add registry.k8s.io/pause:3.1: (1.027019214s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 cache add registry.k8s.io/pause:3.3: (1.161216993s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 cache add registry.k8s.io/pause:latest: (1.104006075s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-117423 /tmp/TestFunctionalserialCacheCmdcacheadd_local2774006965/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 cache add minikube-local-cache-test:functional-117423
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 cache add minikube-local-cache-test:functional-117423: (1.943072067s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 cache delete minikube-local-cache-test:functional-117423
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-117423
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-117423 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (225.825731ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
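
The cache_reload sequence above is a useful recipe on its own: remove an image inside the node, confirm it is gone, then repopulate it from minikube's local cache. The same commands, written out against the functional-117423 profile:

    minikube -p functional-117423 cache add registry.k8s.io/pause:latest
    minikube -p functional-117423 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-117423 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image removed
    minikube -p functional-117423 cache reload
    minikube -p functional-117423 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again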

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 kubectl -- --context functional-117423 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-117423 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (33.66s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-117423 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-117423 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.659461771s)
functional_test.go:757: restart took 33.659587307s for "functional-117423" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.66s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-117423 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 logs: (1.625329812s)
--- PASS: TestFunctional/serial/LogsCmd (1.63s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 logs --file /tmp/TestFunctionalserialLogsFileCmd1741900821/001/logs.txt
E0425 18:48:29.283440   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 logs --file /tmp/TestFunctionalserialLogsFileCmd1741900821/001/logs.txt: (1.657646085s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.66s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.81s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-117423 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-117423
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-117423: exit status 115 (281.667419ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.139:30514 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-117423 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-117423 delete -f testdata/invalidsvc.yaml: (1.329328672s)
--- PASS: TestFunctional/serial/InvalidService (4.81s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-117423 config get cpus: exit status 14 (55.515481ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-117423 config get cpus: exit status 14 (53.190489ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
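
The config round-trip exercised above, as standalone commands; config get exits non-zero when the key is unset, which is what the exit status 14 in the log reflects:

    minikube -p functional-117423 config set cpus 2
    minikube -p functional-117423 config get cpus      # prints 2
    minikube -p functional-117423 config unset cpus
    minikube -p functional-117423 config get cpus      # exit status 14: key not found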

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-117423 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-117423 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23131: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.74s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-117423 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-117423 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (150.944235ms)

                                                
                                                
-- stdout --
	* [functional-117423] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18757
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 18:49:09.446650   23038 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:49:09.446865   23038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:49:09.446896   23038 out.go:304] Setting ErrFile to fd 2...
	I0425 18:49:09.446911   23038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:49:09.447198   23038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:49:09.447793   23038 out.go:298] Setting JSON to false
	I0425 18:49:09.448719   23038 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1895,"bootTime":1714069054,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 18:49:09.448781   23038 start.go:139] virtualization: kvm guest
	I0425 18:49:09.451029   23038 out.go:177] * [functional-117423] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 18:49:09.452287   23038 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 18:49:09.452300   23038 notify.go:220] Checking for updates...
	I0425 18:49:09.453496   23038 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 18:49:09.455230   23038 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:49:09.456541   23038 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:49:09.457910   23038 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 18:49:09.459763   23038 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 18:49:09.461560   23038 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:49:09.461971   23038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:49:09.462030   23038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:49:09.479854   23038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42339
	I0425 18:49:09.480246   23038 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:49:09.480906   23038 main.go:141] libmachine: Using API Version  1
	I0425 18:49:09.480925   23038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:49:09.481340   23038 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:49:09.481525   23038 main.go:141] libmachine: (functional-117423) Calling .DriverName
	I0425 18:49:09.481784   23038 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 18:49:09.482048   23038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:49:09.482080   23038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:49:09.497177   23038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43011
	I0425 18:49:09.497584   23038 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:49:09.498128   23038 main.go:141] libmachine: Using API Version  1
	I0425 18:49:09.498156   23038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:49:09.498496   23038 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:49:09.498718   23038 main.go:141] libmachine: (functional-117423) Calling .DriverName
	I0425 18:49:09.532976   23038 out.go:177] * Using the kvm2 driver based on existing profile
	I0425 18:49:09.534128   23038 start.go:297] selected driver: kvm2
	I0425 18:49:09.534144   23038 start.go:901] validating driver "kvm2" against &{Name:functional-117423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-117423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:49:09.534290   23038 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 18:49:09.536261   23038 out.go:177] 
	W0425 18:49:09.537635   23038 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0425 18:49:09.538893   23038 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-117423 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
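
Both DryRun invocations above validate the request without touching the existing VM; the first is expected to fail because 250MB is below minikube's 1800MB floor. Roughly, outside the test harness:

    # under-provisioned request: rejected during validation, exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
    out/minikube-linux-amd64 start -p functional-117423 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # same dry run without the bad memory value: validation passes and nothing is actually started
    out/minikube-linux-amd64 start -p functional-117423 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio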

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-117423 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-117423 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (165.252482ms)

                                                
                                                
-- stdout --
	* [functional-117423] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18757
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0425 18:49:09.286834   22996 out.go:291] Setting OutFile to fd 1 ...
	I0425 18:49:09.286965   22996 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:49:09.286977   22996 out.go:304] Setting ErrFile to fd 2...
	I0425 18:49:09.286983   22996 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 18:49:09.287327   22996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 18:49:09.287921   22996 out.go:298] Setting JSON to false
	I0425 18:49:09.289009   22996 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1895,"bootTime":1714069054,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 18:49:09.289072   22996 start.go:139] virtualization: kvm guest
	I0425 18:49:09.291586   22996 out.go:177] * [functional-117423] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0425 18:49:09.293305   22996 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 18:49:09.293310   22996 notify.go:220] Checking for updates...
	I0425 18:49:09.295991   22996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 18:49:09.297460   22996 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 18:49:09.298780   22996 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 18:49:09.300071   22996 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 18:49:09.301278   22996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 18:49:09.302998   22996 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 18:49:09.303387   22996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:49:09.303422   22996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:49:09.317861   22996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0425 18:49:09.318281   22996 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:49:09.318783   22996 main.go:141] libmachine: Using API Version  1
	I0425 18:49:09.318803   22996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:49:09.319119   22996 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:49:09.319288   22996 main.go:141] libmachine: (functional-117423) Calling .DriverName
	I0425 18:49:09.319814   22996 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 18:49:09.320313   22996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 18:49:09.320367   22996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 18:49:09.338546   22996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46173
	I0425 18:49:09.338961   22996 main.go:141] libmachine: () Calling .GetVersion
	I0425 18:49:09.339383   22996 main.go:141] libmachine: Using API Version  1
	I0425 18:49:09.339409   22996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 18:49:09.339922   22996 main.go:141] libmachine: () Calling .GetMachineName
	I0425 18:49:09.340098   22996 main.go:141] libmachine: (functional-117423) Calling .DriverName
	I0425 18:49:09.380701   22996 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0425 18:49:09.381958   22996 start.go:297] selected driver: kvm2
	I0425 18:49:09.381978   22996 start.go:901] validating driver "kvm2" against &{Name:functional-117423 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-117423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0425 18:49:09.382107   22996 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 18:49:09.384447   22996 out.go:177] 
	W0425 18:49:09.385940   22996 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0425 18:49:09.387248   22996 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
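
The French output above is the point of this test: the same dry-run memory failure, rendered through minikube's translations. A hedged sketch of reproducing it by hand, assuming the translation is selected from the standard locale environment variables (the test's exact mechanism lives in functional_test.go):

    # assumption: a French locale in the environment is enough to select the fr translation
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-117423 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio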

                                                
                                    
TestFunctional/parallel/StatusCmd (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)
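
The three status calls above cover the default, templated, and JSON outputs; `-f` takes a Go template applied to the status struct, and the `kublet` text in the logged command is just the label string the test chose, not a field name. Standalone:

    out/minikube-linux-amd64 -p functional-117423 status                 # human-readable summary
    out/minikube-linux-amd64 -p functional-117423 status -f "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
    out/minikube-linux-amd64 -p functional-117423 status -o json         # machine-readable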

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (28.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-117423 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-117423 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-826rl" [fe1e0d9d-6a81-40e5-8f82-935da3327786] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-826rl" [fe1e0d9d-6a81-40e5-8f82-935da3327786] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 28.004590089s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.139:31272
functional_test.go:1671: http://192.168.39.139:31272: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-826rl

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.139:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.139:31272
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (28.74s)
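
The ServiceCmdConnect flow above boils down to three steps: deploy the echoserver image, expose it as a NodePort service, then ask minikube for a reachable URL (http://192.168.39.139:31272 in this run) and fetch it. In isolation:

    kubectl --context functional-117423 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-117423 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-117423 service hello-node-connect --url)
    curl -s "$URL"    # echoserver replies with the hostname, request headers, etc., as shown above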

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (56.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3b886345-b919-4055-b9ba-fae2cbf87c64] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004703774s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-117423 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-117423 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-117423 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-117423 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-117423 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [26382562-defd-4be4-849d-31a7fe64e169] Pending
helpers_test.go:344: "sp-pod" [26382562-defd-4be4-849d-31a7fe64e169] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [26382562-defd-4be4-849d-31a7fe64e169] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.0076426s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-117423 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-117423 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-117423 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [09dd6eb9-912a-4911-bece-61412161f449] Pending
helpers_test.go:344: "sp-pod" [09dd6eb9-912a-4911-bece-61412161f449] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [09dd6eb9-912a-4911-bece-61412161f449] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.004942158s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-117423 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (56.38s)
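
The PersistentVolumeClaim sequence above is a persistence round-trip: claim storage, mount it in a pod, write a marker file, delete and recreate the pod, then confirm the file survived on the volume. A rough outline using the same testdata manifests referenced in the log:

    kubectl --context functional-117423 apply -f testdata/storage-provisioner/pvc.yaml   # create the claim (myclaim)
    kubectl --context functional-117423 apply -f testdata/storage-provisioner/pod.yaml   # sp-pod mounts the claim
    kubectl --context functional-117423 exec sp-pod -- touch /tmp/mount/foo              # write through the mount
    kubectl --context functional-117423 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-117423 apply -f testdata/storage-provisioner/pod.yaml   # fresh pod, same claim
    kubectl --context functional-117423 exec sp-pod -- ls /tmp/mount                     # foo should still be listed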

                                                
                                    
TestFunctional/parallel/SSHCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.51s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh -n functional-117423 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 cp functional-117423:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3126370412/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh -n functional-117423 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh -n functional-117423 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.44s)
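
The CpCmd checks exercise `minikube cp` in the three directions seen above: host file into the guest, guest file back out to the host, and host file into a guest directory that does not yet exist, each verified with an ssh `cat`. For example:

    out/minikube-linux-amd64 -p functional-117423 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-117423 cp functional-117423:/home/docker/cp-test.txt ./cp-test.txt
    out/minikube-linux-amd64 -p functional-117423 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-amd64 -p functional-117423 ssh -n functional-117423 "sudo cat /tmp/does/not/exist/cp-test.txt"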

                                                
                                    
TestFunctional/parallel/MySQL (29.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-117423 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-ftv56" [d4fe2f96-9cc8-4410-bc61-acdf331146dc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-ftv56" [d4fe2f96-9cc8-4410-bc61-acdf331146dc] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.008056499s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-117423 exec mysql-64454c8b5c-ftv56 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-117423 exec mysql-64454c8b5c-ftv56 -- mysql -ppassword -e "show databases;": exit status 1 (193.04584ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-117423 exec mysql-64454c8b5c-ftv56 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-117423 exec mysql-64454c8b5c-ftv56 -- mysql -ppassword -e "show databases;": exit status 1 (245.643254ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-117423 exec mysql-64454c8b5c-ftv56 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-117423 exec mysql-64454c8b5c-ftv56 -- mysql -ppassword -e "show databases;": exit status 1 (357.687174ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-117423 exec mysql-64454c8b5c-ftv56 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.42s)
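
The retried failures above are expected: the mysql pod reports Running before mysqld has finished initializing, so the first exec attempts can hit ERROR 1045/2002 and the test simply retries until "show databases;" succeeds. A minimal sketch of the same wait loop (the pod name is specific to this run):

    until kubectl --context functional-117423 exec mysql-64454c8b5c-ftv56 -- mysql -ppassword -e "show databases;"; do
        sleep 2   # mysqld still starting up; try again
    done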

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/13682/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "sudo cat /etc/test/nested/copy/13682/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/13682.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "sudo cat /etc/ssl/certs/13682.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/13682.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "sudo cat /usr/share/ca-certificates/13682.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/136822.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "sudo cat /etc/ssl/certs/136822.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/136822.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "sudo cat /usr/share/ca-certificates/136822.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.37s)
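
CertSync verifies each synced certificate in three places: the /etc/ssl/certs/<name>.pem copy, the /usr/share/ca-certificates copy, and a hash-named entry such as 51391683.0. Assuming the hash names follow the usual OpenSSL subject-hash (c_rehash) convention for /etc/ssl/certs, and that openssl is available in the guest, the mapping can be checked by hand:

    # assumption: <hash>.0 is derived from the certificate's subject hash
    out/minikube-linux-amd64 -p functional-117423 ssh "sudo openssl x509 -noout -hash -in /usr/share/ca-certificates/13682.pem"
    out/minikube-linux-amd64 -p functional-117423 ssh "sudo cat /etc/ssl/certs/51391683.0"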

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-117423 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
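
The NodeLabels check is a one-liner: a go-template that walks the first node's metadata.labels map and prints each key. Shown on its own (the template string is exactly what the test runs):

    kubectl --context functional-117423 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"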

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-117423 ssh "sudo systemctl is-active docker": exit status 1 (233.849602ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-117423 ssh "sudo systemctl is-active containerd": exit status 1 (218.900531ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
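
The two non-zero exits above are the desired outcome: with crio as the selected runtime, docker and containerd should be inactive, and `systemctl is-active` exits non-zero (3 here) for an inactive unit, which the ssh wrapper propagates. The complementary check would be:

    out/minikube-linux-amd64 -p functional-117423 ssh "sudo systemctl is-active crio"         # expected: active, exit 0
    out/minikube-linux-amd64 -p functional-117423 ssh "sudo systemctl is-active docker"       # expected: inactive, exit 3
    out/minikube-linux-amd64 -p functional-117423 ssh "sudo systemctl is-active containerd"   # expected: inactive, exit 3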

                                                
                                    
TestFunctional/parallel/License (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.51s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-117423 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-117423
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-117423
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-117423 image ls --format short --alsologtostderr:
I0425 18:49:18.619217   23719 out.go:291] Setting OutFile to fd 1 ...
I0425 18:49:18.619342   23719 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:18.619352   23719 out.go:304] Setting ErrFile to fd 2...
I0425 18:49:18.619356   23719 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:18.619556   23719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
I0425 18:49:18.620106   23719 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0425 18:49:18.620200   23719 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0425 18:49:18.620534   23719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:18.620569   23719 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:18.635138   23719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39917
I0425 18:49:18.635664   23719 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:18.636189   23719 main.go:141] libmachine: Using API Version  1
I0425 18:49:18.636213   23719 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:18.636532   23719 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:18.636708   23719 main.go:141] libmachine: (functional-117423) Calling .GetState
I0425 18:49:18.638618   23719 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:18.638654   23719 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:18.658026   23719 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
I0425 18:49:18.658567   23719 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:18.659124   23719 main.go:141] libmachine: Using API Version  1
I0425 18:49:18.659157   23719 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:18.659572   23719 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:18.665710   23719 main.go:141] libmachine: (functional-117423) Calling .DriverName
I0425 18:49:18.665924   23719 ssh_runner.go:195] Run: systemctl --version
I0425 18:49:18.665958   23719 main.go:141] libmachine: (functional-117423) Calling .GetSSHHostname
I0425 18:49:18.677233   23719 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:18.677780   23719 main.go:141] libmachine: (functional-117423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:37:c2", ip: ""} in network mk-functional-117423: {Iface:virbr1 ExpiryTime:2024-04-25 19:45:54 +0000 UTC Type:0 Mac:52:54:00:90:37:c2 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-117423 Clientid:01:52:54:00:90:37:c2}
I0425 18:49:18.677816   23719 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined IP address 192.168.39.139 and MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:18.677942   23719 main.go:141] libmachine: (functional-117423) Calling .GetSSHPort
I0425 18:49:18.678176   23719 main.go:141] libmachine: (functional-117423) Calling .GetSSHKeyPath
I0425 18:49:18.678367   23719 main.go:141] libmachine: (functional-117423) Calling .GetSSHUsername
I0425 18:49:18.678526   23719 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/functional-117423/id_rsa Username:docker}
I0425 18:49:18.798289   23719 ssh_runner.go:195] Run: sudo crictl images --output json
I0425 18:49:18.887983   23719 main.go:141] libmachine: Making call to close driver server
I0425 18:49:18.888021   23719 main.go:141] libmachine: (functional-117423) Calling .Close
I0425 18:49:18.888299   23719 main.go:141] libmachine: Successfully made call to close driver server
I0425 18:49:18.888314   23719 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 18:49:18.888327   23719 main.go:141] libmachine: Making call to close driver server
I0425 18:49:18.888334   23719 main.go:141] libmachine: (functional-117423) Calling .Close
I0425 18:49:18.888543   23719 main.go:141] libmachine: Successfully made call to close driver server
I0425 18:49:18.888561   23719 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-117423 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| localhost/minikube-local-cache-test     | functional-117423  | 7485ae6708653 | 3.33kB |
| gcr.io/google-containers/addon-resizer  | functional-117423  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.30.0            | c42f13656d0b2 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.0            | c7aad43836fa5 | 112MB  |
| registry.k8s.io/kube-proxy              | v1.30.0            | a0bf559e280cf | 85.9MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 7383c266ef252 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 259c8277fcbbc | 63MB   |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-117423 image ls --format table --alsologtostderr:
I0425 18:49:21.927019   24045 out.go:291] Setting OutFile to fd 1 ...
I0425 18:49:21.927125   24045 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:21.927136   24045 out.go:304] Setting ErrFile to fd 2...
I0425 18:49:21.927141   24045 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:21.927337   24045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
I0425 18:49:21.927918   24045 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0425 18:49:21.928008   24045 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0425 18:49:21.928361   24045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:21.928398   24045 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:21.942992   24045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33487
I0425 18:49:21.943413   24045 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:21.944005   24045 main.go:141] libmachine: Using API Version  1
I0425 18:49:21.944030   24045 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:21.944345   24045 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:21.944514   24045 main.go:141] libmachine: (functional-117423) Calling .GetState
I0425 18:49:21.946283   24045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:21.946321   24045 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:21.960197   24045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
I0425 18:49:21.960582   24045 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:21.961082   24045 main.go:141] libmachine: Using API Version  1
I0425 18:49:21.961104   24045 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:21.961373   24045 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:21.961552   24045 main.go:141] libmachine: (functional-117423) Calling .DriverName
I0425 18:49:21.961747   24045 ssh_runner.go:195] Run: systemctl --version
I0425 18:49:21.961771   24045 main.go:141] libmachine: (functional-117423) Calling .GetSSHHostname
I0425 18:49:21.964387   24045 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:21.964801   24045 main.go:141] libmachine: (functional-117423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:37:c2", ip: ""} in network mk-functional-117423: {Iface:virbr1 ExpiryTime:2024-04-25 19:45:54 +0000 UTC Type:0 Mac:52:54:00:90:37:c2 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-117423 Clientid:01:52:54:00:90:37:c2}
I0425 18:49:21.964836   24045 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined IP address 192.168.39.139 and MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:21.964939   24045 main.go:141] libmachine: (functional-117423) Calling .GetSSHPort
I0425 18:49:21.965100   24045 main.go:141] libmachine: (functional-117423) Calling .GetSSHKeyPath
I0425 18:49:21.965251   24045 main.go:141] libmachine: (functional-117423) Calling .GetSSHUsername
I0425 18:49:21.965375   24045 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/functional-117423/id_rsa Username:docker}
I0425 18:49:22.049189   24045 ssh_runner.go:195] Run: sudo crictl images --output json
I0425 18:49:22.090965   24045 main.go:141] libmachine: Making call to close driver server
I0425 18:49:22.090979   24045 main.go:141] libmachine: (functional-117423) Calling .Close
I0425 18:49:22.091251   24045 main.go:141] libmachine: Successfully made call to close driver server
I0425 18:49:22.091270   24045 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 18:49:22.091286   24045 main.go:141] libmachine: Making call to close driver server
I0425 18:49:22.091289   24045 main.go:141] libmachine: (functional-117423) DBG | Closing plugin on server side
I0425 18:49:22.091294   24045 main.go:141] libmachine: (functional-117423) Calling .Close
I0425 18:49:22.091608   24045 main.go:141] libmachine: Successfully made call to close driver server
I0425 18:49:22.091626   24045 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 18:49:22.091626   24045 main.go:141] libmachine: (functional-117423) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
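
The ImageCommands listings above (and the JSON variant that follows) are different renderings of the same data; the stderr shows each call ultimately running `sudo crictl images --output json` on the node and formatting the result. Side by side:

    out/minikube-linux-amd64 -p functional-117423 image ls --format short   # repo:tag names only
    out/minikube-linux-amd64 -p functional-117423 image ls --format table   # the table shown above
    out/minikube-linux-amd64 -p functional-117423 image ls --format json    # raw ids, digests, and sizes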

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-117423 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-117423"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/stora
ge-provisioner:v5"],"size":"31470524"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117609952"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67","registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"63026502"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975
d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":["docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8","docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"191760844"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["regis
try.k8s.io/pause:latest"],"size":"247077"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"7485ae67086537171a1cc85c4d1cd1dbe395c19878459cde21377517bd56f428","repoDigests":["localhost/minikube-local-cache-test@sha256:330083bd2ef5f7d623d146fc5be915ac5ae5a544aae4472067ec8f64530b4472"],"repoTags":["localhost/minikube-local-cache-test:functional-117423"],"size":"3330"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubern
etesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","regi
stry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"112170310"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"85932953"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registr
y.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-117423 image ls --format json --alsologtostderr:
I0425 18:49:21.700609   24022 out.go:291] Setting OutFile to fd 1 ...
I0425 18:49:21.700856   24022 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:21.700865   24022 out.go:304] Setting ErrFile to fd 2...
I0425 18:49:21.700869   24022 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:21.701069   24022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
I0425 18:49:21.701632   24022 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0425 18:49:21.701720   24022 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0425 18:49:21.702080   24022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:21.702123   24022 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:21.716978   24022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
I0425 18:49:21.717440   24022 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:21.718074   24022 main.go:141] libmachine: Using API Version  1
I0425 18:49:21.718103   24022 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:21.718448   24022 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:21.718641   24022 main.go:141] libmachine: (functional-117423) Calling .GetState
I0425 18:49:21.720673   24022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:21.720724   24022 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:21.735605   24022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
I0425 18:49:21.736064   24022 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:21.736626   24022 main.go:141] libmachine: Using API Version  1
I0425 18:49:21.736667   24022 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:21.737062   24022 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:21.737238   24022 main.go:141] libmachine: (functional-117423) Calling .DriverName
I0425 18:49:21.737455   24022 ssh_runner.go:195] Run: systemctl --version
I0425 18:49:21.737482   24022 main.go:141] libmachine: (functional-117423) Calling .GetSSHHostname
I0425 18:49:21.739944   24022 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:21.740343   24022 main.go:141] libmachine: (functional-117423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:37:c2", ip: ""} in network mk-functional-117423: {Iface:virbr1 ExpiryTime:2024-04-25 19:45:54 +0000 UTC Type:0 Mac:52:54:00:90:37:c2 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-117423 Clientid:01:52:54:00:90:37:c2}
I0425 18:49:21.740375   24022 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined IP address 192.168.39.139 and MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:21.740507   24022 main.go:141] libmachine: (functional-117423) Calling .GetSSHPort
I0425 18:49:21.740719   24022 main.go:141] libmachine: (functional-117423) Calling .GetSSHKeyPath
I0425 18:49:21.740924   24022 main.go:141] libmachine: (functional-117423) Calling .GetSSHUsername
I0425 18:49:21.741081   24022 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/functional-117423/id_rsa Username:docker}
I0425 18:49:21.825566   24022 ssh_runner.go:195] Run: sudo crictl images --output json
I0425 18:49:21.870512   24022 main.go:141] libmachine: Making call to close driver server
I0425 18:49:21.870529   24022 main.go:141] libmachine: (functional-117423) Calling .Close
I0425 18:49:21.870780   24022 main.go:141] libmachine: Successfully made call to close driver server
I0425 18:49:21.870794   24022 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 18:49:21.870802   24022 main.go:141] libmachine: Making call to close driver server
I0425 18:49:21.870809   24022 main.go:141] libmachine: (functional-117423) Calling .Close
I0425 18:49:21.871022   24022 main.go:141] libmachine: Successfully made call to close driver server
I0425 18:49:21.871043   24022 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 18:49:21.871113   24022 main.go:141] libmachine: (functional-117423) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
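
Note: the stdout captured above is a single JSON array of objects with id, repoDigests, repoTags, and size fields (size is a byte count encoded as a string); under crio the data ultimately comes from `sudo crictl images --output json` on the guest, as the stderr shows. Below is a minimal Go sketch for filtering that output offline, assuming the array has been saved to a hypothetical images.json file; it is an illustration, not part of the test suite.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// image mirrors the objects in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	// images.json is a hypothetical capture of the stdout printed above.
	data, err := os.ReadFile("images.json")
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(data, &imgs); err != nil {
		panic(err)
	}
	// Print only images tagged from registry.k8s.io, as a simple filter.
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			if strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Printf("%s\t%s bytes\n", tag, img.Size)
			}
		}
	}
}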

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-117423 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "85932953"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117609952"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests:
- docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8
- docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee
repoTags:
- docker.io/library/nginx:latest
size: "191760844"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-117423
size: "34114467"
- id: 7485ae67086537171a1cc85c4d1cd1dbe395c19878459cde21377517bd56f428
repoDigests:
- localhost/minikube-local-cache-test@sha256:330083bd2ef5f7d623d146fc5be915ac5ae5a544aae4472067ec8f64530b4472
repoTags:
- localhost/minikube-local-cache-test:functional-117423
size: "3330"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "112170310"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
- registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "63026502"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-117423 image ls --format yaml --alsologtostderr:
I0425 18:49:18.950634   23841 out.go:291] Setting OutFile to fd 1 ...
I0425 18:49:18.950761   23841 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:18.950774   23841 out.go:304] Setting ErrFile to fd 2...
I0425 18:49:18.950781   23841 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:18.950982   23841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
I0425 18:49:18.951556   23841 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0425 18:49:18.951653   23841 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0425 18:49:18.952029   23841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:18.952063   23841 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:18.967396   23841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
I0425 18:49:18.967834   23841 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:18.968400   23841 main.go:141] libmachine: Using API Version  1
I0425 18:49:18.968421   23841 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:18.968828   23841 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:18.969062   23841 main.go:141] libmachine: (functional-117423) Calling .GetState
I0425 18:49:18.971624   23841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:18.971673   23841 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:18.986165   23841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
I0425 18:49:18.986627   23841 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:18.987108   23841 main.go:141] libmachine: Using API Version  1
I0425 18:49:18.987129   23841 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:18.987513   23841 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:18.987682   23841 main.go:141] libmachine: (functional-117423) Calling .DriverName
I0425 18:49:18.987916   23841 ssh_runner.go:195] Run: systemctl --version
I0425 18:49:18.987941   23841 main.go:141] libmachine: (functional-117423) Calling .GetSSHHostname
I0425 18:49:18.990636   23841 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:18.991052   23841 main.go:141] libmachine: (functional-117423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:37:c2", ip: ""} in network mk-functional-117423: {Iface:virbr1 ExpiryTime:2024-04-25 19:45:54 +0000 UTC Type:0 Mac:52:54:00:90:37:c2 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-117423 Clientid:01:52:54:00:90:37:c2}
I0425 18:49:18.991077   23841 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined IP address 192.168.39.139 and MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:18.991238   23841 main.go:141] libmachine: (functional-117423) Calling .GetSSHPort
I0425 18:49:18.991422   23841 main.go:141] libmachine: (functional-117423) Calling .GetSSHKeyPath
I0425 18:49:18.991568   23841 main.go:141] libmachine: (functional-117423) Calling .GetSSHUsername
I0425 18:49:18.991722   23841 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/functional-117423/id_rsa Username:docker}
I0425 18:49:19.138640   23841 ssh_runner.go:195] Run: sudo crictl images --output json
I0425 18:49:19.228133   23841 main.go:141] libmachine: Making call to close driver server
I0425 18:49:19.228145   23841 main.go:141] libmachine: (functional-117423) Calling .Close
I0425 18:49:19.228438   23841 main.go:141] libmachine: (functional-117423) DBG | Closing plugin on server side
I0425 18:49:19.228470   23841 main.go:141] libmachine: Successfully made call to close driver server
I0425 18:49:19.228478   23841 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 18:49:19.228510   23841 main.go:141] libmachine: Making call to close driver server
I0425 18:49:19.228522   23841 main.go:141] libmachine: (functional-117423) Calling .Close
I0425 18:49:19.228742   23841 main.go:141] libmachine: Successfully made call to close driver server
I0425 18:49:19.228758   23841 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-117423 ssh pgrep buildkitd: exit status 1 (238.862807ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image build -t localhost/my-image:functional-117423 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 image build -t localhost/my-image:functional-117423 testdata/build --alsologtostderr: (4.689244248s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-117423 image build -t localhost/my-image:functional-117423 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9c7484e4c18
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-117423
--> c5dc55d65ba
Successfully tagged localhost/my-image:functional-117423
c5dc55d65ba996959415b7424c21acfb6ef46332af1f25f2b394235baf80e25e
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-117423 image build -t localhost/my-image:functional-117423 testdata/build --alsologtostderr:
I0425 18:49:19.536564   23895 out.go:291] Setting OutFile to fd 1 ...
I0425 18:49:19.536729   23895 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:19.536739   23895 out.go:304] Setting ErrFile to fd 2...
I0425 18:49:19.536744   23895 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0425 18:49:19.536939   23895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
I0425 18:49:19.537474   23895 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0425 18:49:19.538238   23895 config.go:182] Loaded profile config "functional-117423": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0425 18:49:19.538790   23895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:19.538836   23895 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:19.553657   23895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
I0425 18:49:19.554125   23895 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:19.554737   23895 main.go:141] libmachine: Using API Version  1
I0425 18:49:19.554774   23895 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:19.555111   23895 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:19.555274   23895 main.go:141] libmachine: (functional-117423) Calling .GetState
I0425 18:49:19.557130   23895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0425 18:49:19.557182   23895 main.go:141] libmachine: Launching plugin server for driver kvm2
I0425 18:49:19.572615   23895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
I0425 18:49:19.573037   23895 main.go:141] libmachine: () Calling .GetVersion
I0425 18:49:19.573562   23895 main.go:141] libmachine: Using API Version  1
I0425 18:49:19.573585   23895 main.go:141] libmachine: () Calling .SetConfigRaw
I0425 18:49:19.573874   23895 main.go:141] libmachine: () Calling .GetMachineName
I0425 18:49:19.574057   23895 main.go:141] libmachine: (functional-117423) Calling .DriverName
I0425 18:49:19.574268   23895 ssh_runner.go:195] Run: systemctl --version
I0425 18:49:19.574293   23895 main.go:141] libmachine: (functional-117423) Calling .GetSSHHostname
I0425 18:49:19.577595   23895 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:19.578072   23895 main.go:141] libmachine: (functional-117423) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:37:c2", ip: ""} in network mk-functional-117423: {Iface:virbr1 ExpiryTime:2024-04-25 19:45:54 +0000 UTC Type:0 Mac:52:54:00:90:37:c2 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-117423 Clientid:01:52:54:00:90:37:c2}
I0425 18:49:19.578100   23895 main.go:141] libmachine: (functional-117423) DBG | domain functional-117423 has defined IP address 192.168.39.139 and MAC address 52:54:00:90:37:c2 in network mk-functional-117423
I0425 18:49:19.578290   23895 main.go:141] libmachine: (functional-117423) Calling .GetSSHPort
I0425 18:49:19.578490   23895 main.go:141] libmachine: (functional-117423) Calling .GetSSHKeyPath
I0425 18:49:19.578677   23895 main.go:141] libmachine: (functional-117423) Calling .GetSSHUsername
I0425 18:49:19.578847   23895 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/functional-117423/id_rsa Username:docker}
I0425 18:49:19.714054   23895 build_images.go:161] Building image from path: /tmp/build.1961120043.tar
I0425 18:49:19.714125   23895 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0425 18:49:19.733798   23895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1961120043.tar
I0425 18:49:19.740507   23895 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1961120043.tar: stat -c "%s %y" /var/lib/minikube/build/build.1961120043.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1961120043.tar': No such file or directory
I0425 18:49:19.740548   23895 ssh_runner.go:362] scp /tmp/build.1961120043.tar --> /var/lib/minikube/build/build.1961120043.tar (3072 bytes)
I0425 18:49:19.789742   23895 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1961120043
I0425 18:49:19.804306   23895 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1961120043 -xf /var/lib/minikube/build/build.1961120043.tar
I0425 18:49:19.832228   23895 crio.go:315] Building image: /var/lib/minikube/build/build.1961120043
I0425 18:49:19.832285   23895 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-117423 /var/lib/minikube/build/build.1961120043 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0425 18:49:24.127959   23895 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-117423 /var/lib/minikube/build/build.1961120043 --cgroup-manager=cgroupfs: (4.295644535s)
I0425 18:49:24.128027   23895 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1961120043
I0425 18:49:24.144931   23895 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1961120043.tar
I0425 18:49:24.157673   23895 build_images.go:217] Built localhost/my-image:functional-117423 from /tmp/build.1961120043.tar
I0425 18:49:24.157709   23895 build_images.go:133] succeeded building to: functional-117423
I0425 18:49:24.157713   23895 build_images.go:134] failed building to: 
I0425 18:49:24.157736   23895 main.go:141] libmachine: Making call to close driver server
I0425 18:49:24.157744   23895 main.go:141] libmachine: (functional-117423) Calling .Close
I0425 18:49:24.158023   23895 main.go:141] libmachine: Successfully made call to close driver server
I0425 18:49:24.158043   23895 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 18:49:24.158048   23895 main.go:141] libmachine: (functional-117423) DBG | Closing plugin on server side
I0425 18:49:24.158050   23895 main.go:141] libmachine: Making call to close driver server
I0425 18:49:24.158083   23895 main.go:141] libmachine: (functional-117423) Calling .Close
I0425 18:49:24.158306   23895 main.go:141] libmachine: Successfully made call to close driver server
I0425 18:49:24.158320   23895 main.go:141] libmachine: Making call to close connection to plugin binary
I0425 18:49:24.158338   23895 main.go:141] libmachine: (functional-117423) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image ls
2024/04/25 18:49:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.16s)
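
Note: the build is driven by shelling out to the minikube binary at functional_test.go:314; under crio, minikube tars testdata/build, copies the archive into the guest, and runs `sudo podman build ... --cgroup-manager=cgroupfs`, as the stderr above shows. A minimal Go sketch of the same invocation outside the test harness follows; the binary path, profile, and tag are the ones from this run and would differ in another environment.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same arguments the test passes at functional_test.go:314.
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-117423",
		"image", "build",
		"-t", "localhost/my-image:functional-117423",
		"testdata/build",
		"--alsologtostderr",
	)
	// Build progress and the in-guest podman output both land here.
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}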

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.077773822s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-117423
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image load --daemon gcr.io/google-containers/addon-resizer:functional-117423 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 image load --daemon gcr.io/google-containers/addon-resizer:functional-117423 --alsologtostderr: (4.791073433s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image load --daemon gcr.io/google-containers/addon-resizer:functional-117423 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 image load --daemon gcr.io/google-containers/addon-resizer:functional-117423 --alsologtostderr: (4.394151658s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.474329682s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-117423
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image load --daemon gcr.io/google-containers/addon-resizer:functional-117423 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 image load --daemon gcr.io/google-containers/addon-resizer:functional-117423 --alsologtostderr: (6.217280663s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 image ls: (1.991943525s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.71s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image save gcr.io/google-containers/addon-resizer:functional-117423 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 image save gcr.io/google-containers/addon-resizer:functional-117423 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.845816648s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.85s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image rm gcr.io/google-containers/addon-resizer:functional-117423 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.905493787s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.35s)
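
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile above form a tar roundtrip for the same tag. A minimal Go sketch replaying that sequence with the image name and tar path from this run (both would differ on another machine) is shown below; it simply re-issues the commands recorded in the log.

package main

import (
	"fmt"
	"os/exec"
)

// run wraps one minikube invocation so the save -> rm -> load roundtrip
// above can be replayed as a single sequence.
func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

func main() {
	img := "gcr.io/google-containers/addon-resizer:functional-117423"
	tar := "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar"
	run("-p", "functional-117423", "image", "save", img, tar, "--alsologtostderr")
	run("-p", "functional-117423", "image", "rm", img, "--alsologtostderr")
	run("-p", "functional-117423", "image", "load", tar, "--alsologtostderr")
	run("-p", "functional-117423", "image", "ls")
}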

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-117423
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 image save --daemon gcr.io/google-containers/addon-resizer:functional-117423 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 image save --daemon gcr.io/google-containers/addon-resizer:functional-117423 --alsologtostderr: (4.93338774s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-117423
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.97s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-117423 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-117423 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-csrjx" [3599e71c-b559-4129-a930-14ab00d86415] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-csrjx" [3599e71c-b559-4129-a930-14ab00d86415] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004776921s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "422.040014ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "59.502501ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "350.78547ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "62.47285ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdany-port1543490249/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714070948055114588" to /tmp/TestFunctionalparallelMountCmdany-port1543490249/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714070948055114588" to /tmp/TestFunctionalparallelMountCmdany-port1543490249/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714070948055114588" to /tmp/TestFunctionalparallelMountCmdany-port1543490249/001/test-1714070948055114588
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-117423 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.584783ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 25 18:49 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 25 18:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 25 18:49 test-1714070948055114588
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh cat /mount-9p/test-1714070948055114588
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-117423 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8812fb30-5bbe-4261-bf5c-fd3e40b5e902] Pending
helpers_test.go:344: "busybox-mount" [8812fb30-5bbe-4261-bf5c-fd3e40b5e902] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8812fb30-5bbe-4261-bf5c-fd3e40b5e902] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8812fb30-5bbe-4261-bf5c-fd3e40b5e902] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004930667s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-117423 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdany-port1543490249/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.63s)
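
Note: the first findmnt probe above exits 1 because the 9p mount is not ready yet, and the test simply retries until it appears. A minimal Go sketch of that poll-until-mounted loop follows, assuming the same minikube binary, profile, and /mount-9p mount point from this run; the timeout and poll interval are arbitrary illustrations.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `minikube ssh "findmnt -T <dir> | grep 9p"` until the
// 9p mount shows up or the deadline passes, mirroring the retry visible in
// the log above.
func waitForMount(profile, dir string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", dir))
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("mount %s did not appear within %s", dir, timeout)
}

func main() {
	if err := waitForMount("functional-117423", "/mount-9p", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("9p mount is ready")
}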

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 service list: (1.230963846s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-117423 service list -o json: (1.2589088s)
functional_test.go:1490: Took "1.259009477s" to run "out/minikube-linux-amd64 -p functional-117423 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.139:30770
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdspecific-port3868802293/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-117423 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (269.591771ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdspecific-port3868802293/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-117423 ssh "sudo umount -f /mount-9p": exit status 1 (239.149601ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-117423 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-117423 /tmp/TestFunctionalparallelMountCmdspecific-port3868802293/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-117423 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.139:30770
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-117423
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-117423
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-117423
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (268.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-912667 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0425 18:50:45.439176   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:51:13.126038   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 18:53:36.328102   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:36.333467   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:36.343767   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:36.364022   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:36.404325   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:36.484705   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:36.645086   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:36.965630   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:37.606539   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:38.887408   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:41.448268   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:46.568989   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:53:56.809839   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-912667 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m28.137971998s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (268.85s)

TestMultiControlPlane/serial/DeployApp (8.45s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-912667 -- rollout status deployment/busybox: (5.940521853s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-6lkjg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-nxhjn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-tcxzk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-6lkjg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-nxhjn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-tcxzk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-6lkjg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-nxhjn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-tcxzk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.45s)

TestMultiControlPlane/serial/PingHostFromPods (1.47s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-6lkjg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-6lkjg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-nxhjn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-nxhjn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-tcxzk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-912667 -- exec busybox-fc5497c4f-tcxzk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)

TestMultiControlPlane/serial/AddWorkerNode (48.28s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-912667 -v=7 --alsologtostderr
E0425 18:54:17.290709   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 18:54:58.251203   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-912667 -v=7 --alsologtostderr: (47.390814862s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.28s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-912667 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

TestMultiControlPlane/serial/CopyFile (13.8s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp testdata/cp-test.txt ha-912667:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile60710412/001/cp-test_ha-912667.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667:/home/docker/cp-test.txt ha-912667-m02:/home/docker/cp-test_ha-912667_ha-912667-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m02 "sudo cat /home/docker/cp-test_ha-912667_ha-912667-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667:/home/docker/cp-test.txt ha-912667-m03:/home/docker/cp-test_ha-912667_ha-912667-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m03 "sudo cat /home/docker/cp-test_ha-912667_ha-912667-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667:/home/docker/cp-test.txt ha-912667-m04:/home/docker/cp-test_ha-912667_ha-912667-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m04 "sudo cat /home/docker/cp-test_ha-912667_ha-912667-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp testdata/cp-test.txt ha-912667-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile60710412/001/cp-test_ha-912667-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m02:/home/docker/cp-test.txt ha-912667:/home/docker/cp-test_ha-912667-m02_ha-912667.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667 "sudo cat /home/docker/cp-test_ha-912667-m02_ha-912667.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m02:/home/docker/cp-test.txt ha-912667-m03:/home/docker/cp-test_ha-912667-m02_ha-912667-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m03 "sudo cat /home/docker/cp-test_ha-912667-m02_ha-912667-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m02:/home/docker/cp-test.txt ha-912667-m04:/home/docker/cp-test_ha-912667-m02_ha-912667-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m04 "sudo cat /home/docker/cp-test_ha-912667-m02_ha-912667-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp testdata/cp-test.txt ha-912667-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile60710412/001/cp-test_ha-912667-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt ha-912667:/home/docker/cp-test_ha-912667-m03_ha-912667.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667 "sudo cat /home/docker/cp-test_ha-912667-m03_ha-912667.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt ha-912667-m02:/home/docker/cp-test_ha-912667-m03_ha-912667-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m02 "sudo cat /home/docker/cp-test_ha-912667-m03_ha-912667-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m03:/home/docker/cp-test.txt ha-912667-m04:/home/docker/cp-test_ha-912667-m03_ha-912667-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m04 "sudo cat /home/docker/cp-test_ha-912667-m03_ha-912667-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp testdata/cp-test.txt ha-912667-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile60710412/001/cp-test_ha-912667-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt ha-912667:/home/docker/cp-test_ha-912667-m04_ha-912667.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667 "sudo cat /home/docker/cp-test_ha-912667-m04_ha-912667.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt ha-912667-m02:/home/docker/cp-test_ha-912667-m04_ha-912667-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m02 "sudo cat /home/docker/cp-test_ha-912667-m04_ha-912667-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 cp ha-912667-m04:/home/docker/cp-test.txt ha-912667-m03:/home/docker/cp-test_ha-912667-m04_ha-912667-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 ssh -n ha-912667-m03 "sudo cat /home/docker/cp-test_ha-912667-m04_ha-912667-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.80s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.480655112s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.42s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.52s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-912667 node delete m03 -v=7 --alsologtostderr: (16.755169253s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.52s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

TestMultiControlPlane/serial/RestartCluster (379.45s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-912667 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0425 19:08:36.328573   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 19:09:59.374725   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 19:10:45.439104   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
E0425 19:13:36.328784   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-912667 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m18.653422465s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (379.45s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

TestMultiControlPlane/serial/AddSecondaryNode (76.41s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-912667 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-912667 --control-plane -v=7 --alsologtostderr: (1m15.501520367s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-912667 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.41s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

TestJSONOutput/start/Command (98.99s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-105671 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0425 19:15:45.438610   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-105671 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.985057609s)
--- PASS: TestJSONOutput/start/Command (98.99s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-105671 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.72s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-105671 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.38s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-105671 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-105671 --output=json --user=testUser: (7.38366015s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-286847 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-286847 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.016359ms)

-- stdout --
	{"specversion":"1.0","id":"6c47669f-651a-48b5-8c69-5d4fded2225c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-286847] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5bd98c10-0155-48ed-a719-52add3fea5c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18757"}}
	{"specversion":"1.0","id":"2e9453de-f4a7-4c2f-adcf-c50a2e04e9d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e58346d7-c80e-4351-810b-60dd1fd1d4fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig"}}
	{"specversion":"1.0","id":"4b2fa40d-6212-49ac-b268-43434e681f9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube"}}
	{"specversion":"1.0","id":"72534755-5812-4925-b851-3c411e4ef900","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f98cee02-2ae7-4bfa-9dae-f4aa4807ff98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"854517f5-cd01-4de0-a37d-a286b6e1b73e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
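(Aside for readers of this report: every line in the captured stdout above is a CloudEvents-style JSON envelope whose data payload is a flat string map; step, info and error events all share this shape. The Go fragment below is a minimal, illustrative sketch of decoding one such line. The event struct and its field names are assumptions made for this example, not types taken from the minikube source tree.)

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the envelope fields visible in the captured output above.
// Illustrative only; not a type from the minikube codebase.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// A shortened copy of the error event shown above.
	line := `{"specversion":"1.0","id":"854517f5-cd01-4de0-a37d-a286b6e1b73e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	// For an error event, the exit code and message travel in the data map.
	fmt.Println(e.Type, e.Data["exitcode"], e.Data["message"])
}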
helpers_test.go:175: Cleaning up "json-output-error-286847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-286847
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (93.91s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-713895 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-713895 --driver=kvm2  --container-runtime=crio: (46.690777891s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-716429 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-716429 --driver=kvm2  --container-runtime=crio: (44.511788192s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-713895
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-716429
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-716429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-716429
helpers_test.go:175: Cleaning up "first-713895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-713895
--- PASS: TestMinikubeProfile (93.91s)

TestMountStart/serial/StartWithMountFirst (29.1s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-253519 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0425 19:18:36.328602   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 19:18:48.488403   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-253519 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.094273026s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.10s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-253519 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-253519 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

TestMountStart/serial/StartWithMountSecond (29.57s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-269363 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-269363 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.568703414s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.57s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269363 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269363 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-253519 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.66s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269363 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269363 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.66s)

TestMountStart/serial/Stop (1.51s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-269363
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-269363: (1.507215672s)
--- PASS: TestMountStart/serial/Stop (1.51s)

TestMountStart/serial/RestartStopped (22.93s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-269363
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-269363: (21.92907475s)
--- PASS: TestMountStart/serial/RestartStopped (22.93s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269363 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-269363 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (105.74s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-857482 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0425 19:20:45.439263   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-857482 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m45.314449331s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.74s)

TestMultiNode/serial/DeployApp2Nodes (5.6s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-857482 -- rollout status deployment/busybox: (3.855342175s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- exec busybox-fc5497c4f-5nvcd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- exec busybox-fc5497c4f-b4tqk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- exec busybox-fc5497c4f-5nvcd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- exec busybox-fc5497c4f-b4tqk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- exec busybox-fc5497c4f-5nvcd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- exec busybox-fc5497c4f-b4tqk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.60s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- exec busybox-fc5497c4f-5nvcd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- exec busybox-fc5497c4f-5nvcd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- exec busybox-fc5497c4f-b4tqk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-857482 -- exec busybox-fc5497c4f-b4tqk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

TestMultiNode/serial/AddNode (43.01s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-857482 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-857482 -v 3 --alsologtostderr: (42.416025559s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.01s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-857482 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (7.64s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp testdata/cp-test.txt multinode-857482:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp multinode-857482:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile932174876/001/cp-test_multinode-857482.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp multinode-857482:/home/docker/cp-test.txt multinode-857482-m02:/home/docker/cp-test_multinode-857482_multinode-857482-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m02 "sudo cat /home/docker/cp-test_multinode-857482_multinode-857482-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp multinode-857482:/home/docker/cp-test.txt multinode-857482-m03:/home/docker/cp-test_multinode-857482_multinode-857482-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m03 "sudo cat /home/docker/cp-test_multinode-857482_multinode-857482-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp testdata/cp-test.txt multinode-857482-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp multinode-857482-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile932174876/001/cp-test_multinode-857482-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp multinode-857482-m02:/home/docker/cp-test.txt multinode-857482:/home/docker/cp-test_multinode-857482-m02_multinode-857482.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482 "sudo cat /home/docker/cp-test_multinode-857482-m02_multinode-857482.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp multinode-857482-m02:/home/docker/cp-test.txt multinode-857482-m03:/home/docker/cp-test_multinode-857482-m02_multinode-857482-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m03 "sudo cat /home/docker/cp-test_multinode-857482-m02_multinode-857482-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp testdata/cp-test.txt multinode-857482-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp multinode-857482-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile932174876/001/cp-test_multinode-857482-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp multinode-857482-m03:/home/docker/cp-test.txt multinode-857482:/home/docker/cp-test_multinode-857482-m03_multinode-857482.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482 "sudo cat /home/docker/cp-test_multinode-857482-m03_multinode-857482.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 cp multinode-857482-m03:/home/docker/cp-test.txt multinode-857482-m02:/home/docker/cp-test_multinode-857482-m03_multinode-857482-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 ssh -n multinode-857482-m02 "sudo cat /home/docker/cp-test_multinode-857482-m03_multinode-857482-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.64s)

TestMultiNode/serial/StopNode (3.18s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-857482 node stop m03: (2.291455602s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-857482 status: exit status 7 (449.341342ms)

-- stdout --
	multinode-857482
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-857482-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-857482-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
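(Aside for readers of this report: the plain-text status output captured above is one block per node, a name line followed by key: value pairs. The Go sketch below reads that shape back into per-node fields; it is purely illustrative, assumes plain blank-line separators between blocks even though the captured output carries a stray tab on those lines, and is not code from the minikube tests, which only inspect the exit code and raw output.)

package main

import (
	"fmt"
	"strings"
)

// parseStatus turns "minikube status"-style text (a node name line followed
// by key: value pairs, blocks separated by blank lines) into per-node fields.
// Illustrative helper for this report, not code from the minikube repository.
func parseStatus(out string) map[string]map[string]string {
	nodes := map[string]map[string]string{}
	for _, block := range strings.Split(strings.TrimSpace(out), "\n\n") {
		lines := strings.Split(block, "\n")
		name := strings.TrimSpace(lines[0])
		if name == "" {
			continue
		}
		fields := map[string]string{}
		for _, l := range lines[1:] {
			if k, v, ok := strings.Cut(strings.TrimSpace(l), ": "); ok {
				fields[k] = v
			}
		}
		nodes[name] = fields
	}
	return nodes
}

func main() {
	sample := "multinode-857482\ntype: Control Plane\nhost: Running\nkubelet: Running\n\nmultinode-857482-m03\ntype: Worker\nhost: Stopped\nkubelet: Stopped\n"
	for name, fields := range parseStatus(sample) {
		fmt.Printf("%s: host=%s kubelet=%s\n", name, fields["host"], fields["kubelet"])
	}
}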
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-857482 status --alsologtostderr: exit status 7 (439.002496ms)

-- stdout --
	multinode-857482
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-857482-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-857482-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0425 19:22:43.026146   42246 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:22:43.026288   42246 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:22:43.026298   42246 out.go:304] Setting ErrFile to fd 2...
	I0425 19:22:43.026302   42246 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:22:43.026474   42246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:22:43.026637   42246 out.go:298] Setting JSON to false
	I0425 19:22:43.026662   42246 mustload.go:65] Loading cluster: multinode-857482
	I0425 19:22:43.026714   42246 notify.go:220] Checking for updates...
	I0425 19:22:43.027200   42246 config.go:182] Loaded profile config "multinode-857482": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:22:43.027223   42246 status.go:255] checking status of multinode-857482 ...
	I0425 19:22:43.027668   42246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:22:43.027725   42246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:22:43.044725   42246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33279
	I0425 19:22:43.045252   42246 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:22:43.045861   42246 main.go:141] libmachine: Using API Version  1
	I0425 19:22:43.045908   42246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:22:43.046288   42246 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:22:43.046489   42246 main.go:141] libmachine: (multinode-857482) Calling .GetState
	I0425 19:22:43.048120   42246 status.go:330] multinode-857482 host status = "Running" (err=<nil>)
	I0425 19:22:43.048136   42246 host.go:66] Checking if "multinode-857482" exists ...
	I0425 19:22:43.048423   42246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:22:43.048456   42246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:22:43.063566   42246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39895
	I0425 19:22:43.064033   42246 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:22:43.064520   42246 main.go:141] libmachine: Using API Version  1
	I0425 19:22:43.064538   42246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:22:43.064811   42246 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:22:43.065012   42246 main.go:141] libmachine: (multinode-857482) Calling .GetIP
	I0425 19:22:43.068153   42246 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:22:43.068543   42246 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:22:43.068572   42246 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:22:43.068778   42246 host.go:66] Checking if "multinode-857482" exists ...
	I0425 19:22:43.069116   42246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:22:43.069157   42246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:22:43.084883   42246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0425 19:22:43.085366   42246 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:22:43.085835   42246 main.go:141] libmachine: Using API Version  1
	I0425 19:22:43.085861   42246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:22:43.086123   42246 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:22:43.086336   42246 main.go:141] libmachine: (multinode-857482) Calling .DriverName
	I0425 19:22:43.086523   42246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 19:22:43.086562   42246 main.go:141] libmachine: (multinode-857482) Calling .GetSSHHostname
	I0425 19:22:43.089056   42246 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:22:43.089418   42246 main.go:141] libmachine: (multinode-857482) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a0:85:87", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:20:12 +0000 UTC Type:0 Mac:52:54:00:a0:85:87 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-857482 Clientid:01:52:54:00:a0:85:87}
	I0425 19:22:43.089459   42246 main.go:141] libmachine: (multinode-857482) DBG | domain multinode-857482 has defined IP address 192.168.39.194 and MAC address 52:54:00:a0:85:87 in network mk-multinode-857482
	I0425 19:22:43.089528   42246 main.go:141] libmachine: (multinode-857482) Calling .GetSSHPort
	I0425 19:22:43.089716   42246 main.go:141] libmachine: (multinode-857482) Calling .GetSSHKeyPath
	I0425 19:22:43.089867   42246 main.go:141] libmachine: (multinode-857482) Calling .GetSSHUsername
	I0425 19:22:43.090010   42246 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/multinode-857482/id_rsa Username:docker}
	I0425 19:22:43.174765   42246 ssh_runner.go:195] Run: systemctl --version
	I0425 19:22:43.181709   42246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 19:22:43.197963   42246 kubeconfig.go:125] found "multinode-857482" server: "https://192.168.39.194:8443"
	I0425 19:22:43.197991   42246 api_server.go:166] Checking apiserver status ...
	I0425 19:22:43.198028   42246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0425 19:22:43.213100   42246 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup
	W0425 19:22:43.223831   42246 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0425 19:22:43.223881   42246 ssh_runner.go:195] Run: ls
	I0425 19:22:43.229032   42246 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I0425 19:22:43.233161   42246 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I0425 19:22:43.233188   42246 status.go:422] multinode-857482 apiserver status = Running (err=<nil>)
	I0425 19:22:43.233199   42246 status.go:257] multinode-857482 status: &{Name:multinode-857482 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0425 19:22:43.233218   42246 status.go:255] checking status of multinode-857482-m02 ...
	I0425 19:22:43.233590   42246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:22:43.233631   42246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:22:43.248791   42246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
	I0425 19:22:43.249151   42246 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:22:43.249601   42246 main.go:141] libmachine: Using API Version  1
	I0425 19:22:43.249620   42246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:22:43.249908   42246 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:22:43.250133   42246 main.go:141] libmachine: (multinode-857482-m02) Calling .GetState
	I0425 19:22:43.251587   42246 status.go:330] multinode-857482-m02 host status = "Running" (err=<nil>)
	I0425 19:22:43.251607   42246 host.go:66] Checking if "multinode-857482-m02" exists ...
	I0425 19:22:43.251922   42246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:22:43.251964   42246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:22:43.267016   42246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39711
	I0425 19:22:43.267424   42246 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:22:43.267863   42246 main.go:141] libmachine: Using API Version  1
	I0425 19:22:43.267886   42246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:22:43.268195   42246 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:22:43.268383   42246 main.go:141] libmachine: (multinode-857482-m02) Calling .GetIP
	I0425 19:22:43.271140   42246 main.go:141] libmachine: (multinode-857482-m02) DBG | domain multinode-857482-m02 has defined MAC address 52:54:00:d6:f8:e2 in network mk-multinode-857482
	I0425 19:22:43.271564   42246 main.go:141] libmachine: (multinode-857482-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f8:e2", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:21:17 +0000 UTC Type:0 Mac:52:54:00:d6:f8:e2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-857482-m02 Clientid:01:52:54:00:d6:f8:e2}
	I0425 19:22:43.271600   42246 main.go:141] libmachine: (multinode-857482-m02) DBG | domain multinode-857482-m02 has defined IP address 192.168.39.172 and MAC address 52:54:00:d6:f8:e2 in network mk-multinode-857482
	I0425 19:22:43.271747   42246 host.go:66] Checking if "multinode-857482-m02" exists ...
	I0425 19:22:43.272023   42246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:22:43.272057   42246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:22:43.286950   42246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39197
	I0425 19:22:43.287288   42246 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:22:43.287772   42246 main.go:141] libmachine: Using API Version  1
	I0425 19:22:43.287789   42246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:22:43.288133   42246 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:22:43.288305   42246 main.go:141] libmachine: (multinode-857482-m02) Calling .DriverName
	I0425 19:22:43.288519   42246 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0425 19:22:43.288538   42246 main.go:141] libmachine: (multinode-857482-m02) Calling .GetSSHHostname
	I0425 19:22:43.291269   42246 main.go:141] libmachine: (multinode-857482-m02) DBG | domain multinode-857482-m02 has defined MAC address 52:54:00:d6:f8:e2 in network mk-multinode-857482
	I0425 19:22:43.291657   42246 main.go:141] libmachine: (multinode-857482-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:f8:e2", ip: ""} in network mk-multinode-857482: {Iface:virbr1 ExpiryTime:2024-04-25 20:21:17 +0000 UTC Type:0 Mac:52:54:00:d6:f8:e2 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:multinode-857482-m02 Clientid:01:52:54:00:d6:f8:e2}
	I0425 19:22:43.291696   42246 main.go:141] libmachine: (multinode-857482-m02) DBG | domain multinode-857482-m02 has defined IP address 192.168.39.172 and MAC address 52:54:00:d6:f8:e2 in network mk-multinode-857482
	I0425 19:22:43.291873   42246 main.go:141] libmachine: (multinode-857482-m02) Calling .GetSSHPort
	I0425 19:22:43.292056   42246 main.go:141] libmachine: (multinode-857482-m02) Calling .GetSSHKeyPath
	I0425 19:22:43.292221   42246 main.go:141] libmachine: (multinode-857482-m02) Calling .GetSSHUsername
	I0425 19:22:43.292398   42246 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18757-6355/.minikube/machines/multinode-857482-m02/id_rsa Username:docker}
	I0425 19:22:43.374671   42246 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0425 19:22:43.390113   42246 status.go:257] multinode-857482-m02 status: &{Name:multinode-857482-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0425 19:22:43.390145   42246 status.go:255] checking status of multinode-857482-m03 ...
	I0425 19:22:43.390511   42246 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0425 19:22:43.390546   42246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0425 19:22:43.408259   42246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I0425 19:22:43.408698   42246 main.go:141] libmachine: () Calling .GetVersion
	I0425 19:22:43.409195   42246 main.go:141] libmachine: Using API Version  1
	I0425 19:22:43.409221   42246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0425 19:22:43.409522   42246 main.go:141] libmachine: () Calling .GetMachineName
	I0425 19:22:43.409754   42246 main.go:141] libmachine: (multinode-857482-m03) Calling .GetState
	I0425 19:22:43.411148   42246 status.go:330] multinode-857482-m03 host status = "Stopped" (err=<nil>)
	I0425 19:22:43.411159   42246 status.go:343] host is not running, skipping remaining checks
	I0425 19:22:43.411165   42246 status.go:257] multinode-857482-m03 status: &{Name:multinode-857482-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.18s)
TestMultiNode/serial/StartAfterStop (32.09s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-857482 node start m03 -v=7 --alsologtostderr: (31.433074911s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.09s)
TestMultiNode/serial/DeleteNode (2.32s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-857482 node delete m03: (1.77335402s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.32s)
TestMultiNode/serial/RestartMultiNode (172.01s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-857482 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0425 19:33:36.328690   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-857482 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m51.469065217s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-857482 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (172.01s)
TestMultiNode/serial/ValidateNameConflict (45.52s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-857482
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-857482-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-857482-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (77.509809ms)
-- stdout --
	* [multinode-857482-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18757
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-857482-m02' is duplicated with machine name 'multinode-857482-m02' in profile 'multinode-857482'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-857482-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-857482-m03 --driver=kvm2  --container-runtime=crio: (44.344345814s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-857482
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-857482: exit status 80 (242.463088ms)
-- stdout --
	* Adding node m03 to cluster multinode-857482 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-857482-m03 already exists in multinode-857482-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-857482-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.52s)
TestScheduledStopUnix (119.12s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-271622 --memory=2048 --driver=kvm2  --container-runtime=crio
E0425 19:40:45.438412   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-271622 --memory=2048 --driver=kvm2  --container-runtime=crio: (47.348262567s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-271622 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-271622 -n scheduled-stop-271622
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-271622 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-271622 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-271622 -n scheduled-stop-271622
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-271622
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-271622 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-271622
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-271622: exit status 7 (74.723861ms)
-- stdout --
	scheduled-stop-271622
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-271622 -n scheduled-stop-271622
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-271622 -n scheduled-stop-271622: exit status 7 (81.685471ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-271622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-271622
--- PASS: TestScheduledStopUnix (119.12s)
TestRunningBinaryUpgrade (162.51s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1368384029 start -p running-upgrade-494541 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1368384029 start -p running-upgrade-494541 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m12.699878342s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-494541 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-494541 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m25.456433768s)
helpers_test.go:175: Cleaning up "running-upgrade-494541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-494541
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-494541: (1.598097837s)
--- PASS: TestRunningBinaryUpgrade (162.51s)
TestPause/serial/Start (109.71s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-762664 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-762664 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m49.714736974s)
--- PASS: TestPause/serial/Start (109.71s)
TestNetworkPlugins/group/false (3.73s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-120641 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-120641 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (106.418392ms)
-- stdout --
	* [false-120641] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18757
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0425 19:42:29.322136   50122 out.go:291] Setting OutFile to fd 1 ...
	I0425 19:42:29.322277   50122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:42:29.322288   50122 out.go:304] Setting ErrFile to fd 2...
	I0425 19:42:29.322294   50122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0425 19:42:29.322486   50122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18757-6355/.minikube/bin
	I0425 19:42:29.323044   50122 out.go:298] Setting JSON to false
	I0425 19:42:29.323898   50122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5095,"bootTime":1714069054,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0425 19:42:29.323958   50122 start.go:139] virtualization: kvm guest
	I0425 19:42:29.326178   50122 out.go:177] * [false-120641] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0425 19:42:29.327399   50122 out.go:177]   - MINIKUBE_LOCATION=18757
	I0425 19:42:29.328642   50122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0425 19:42:29.327408   50122 notify.go:220] Checking for updates...
	I0425 19:42:29.330086   50122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	I0425 19:42:29.331462   50122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	I0425 19:42:29.332700   50122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0425 19:42:29.333833   50122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0425 19:42:29.335390   50122 config.go:182] Loaded profile config "force-systemd-env-783271": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:42:29.335491   50122 config.go:182] Loaded profile config "offline-crio-744375": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:42:29.335563   50122 config.go:182] Loaded profile config "pause-762664": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0425 19:42:29.335645   50122 driver.go:392] Setting default libvirt URI to qemu:///system
	I0425 19:42:29.370285   50122 out.go:177] * Using the kvm2 driver based on user configuration
	I0425 19:42:29.371656   50122 start.go:297] selected driver: kvm2
	I0425 19:42:29.371667   50122 start.go:901] validating driver "kvm2" against <nil>
	I0425 19:42:29.371677   50122 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0425 19:42:29.373663   50122 out.go:177] 
	W0425 19:42:29.374897   50122 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0425 19:42:29.376194   50122 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-120641 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-120641
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-120641
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-120641
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-120641
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-120641
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-120641
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-120641
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-120641
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-120641
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-120641
>>> host: /etc/nsswitch.conf:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: /etc/hosts:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: /etc/resolv.conf:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-120641
>>> host: crictl pods:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: crictl containers:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> k8s: describe netcat deployment:
error: context "false-120641" does not exist
>>> k8s: describe netcat pod(s):
error: context "false-120641" does not exist
>>> k8s: netcat logs:
error: context "false-120641" does not exist
>>> k8s: describe coredns deployment:
error: context "false-120641" does not exist
>>> k8s: describe coredns pods:
error: context "false-120641" does not exist
>>> k8s: coredns logs:
error: context "false-120641" does not exist
>>> k8s: describe api server pod(s):
error: context "false-120641" does not exist
>>> k8s: api server logs:
error: context "false-120641" does not exist
>>> host: /etc/cni:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: ip a s:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: ip r s:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: iptables-save:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: iptables table nat:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> k8s: describe kube-proxy daemon set:
error: context "false-120641" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "false-120641" does not exist
>>> k8s: kube-proxy logs:
error: context "false-120641" does not exist
>>> host: kubelet daemon status:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: kubelet daemon config:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> k8s: kubelet logs:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-120641
>>> host: docker daemon status:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: docker daemon config:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: /etc/docker/daemon.json:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: docker system info:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: cri-docker daemon status:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: cri-docker daemon config:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: cri-dockerd version:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: containerd daemon status:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: containerd daemon config:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: /etc/containerd/config.toml:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: containerd config dump:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: crio daemon status:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: crio daemon config:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: /etc/crio:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
>>> host: crio config:
* Profile "false-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-120641"
----------------------- debugLogs end: false-120641 [took: 3.485057174s] --------------------------------
helpers_test.go:175: Cleaning up "false-120641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-120641
--- PASS: TestNetworkPlugins/group/false (3.73s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-335371 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-335371 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (78.449757ms)
-- stdout --
	* [NoKubernetes-335371] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18757
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18757-6355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18757-6355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
TestNoKubernetes/serial/StartWithK8s (112.16s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-335371 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-335371 --driver=kvm2  --container-runtime=crio: (1m51.885848419s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-335371 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (112.16s)
TestNoKubernetes/serial/StartWithStopK8s (29.44s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-335371 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-335371 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.161689705s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-335371 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-335371 status -o json: exit status 2 (260.959605ms)
-- stdout --
	{"Name":"NoKubernetes-335371","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-335371
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-335371: (1.013896092s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.44s)
TestNoKubernetes/serial/Start (50.77s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-335371 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-335371 --no-kubernetes --driver=kvm2  --container-runtime=crio: (50.766718385s)
--- PASS: TestNoKubernetes/serial/Start (50.77s)
TestStoppedBinaryUpgrade/Setup (2.62s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.62s)
TestStoppedBinaryUpgrade/Upgrade (161.91s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2199207089 start -p stopped-upgrade-980156 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2199207089 start -p stopped-upgrade-980156 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m49.267014603s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2199207089 -p stopped-upgrade-980156 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2199207089 -p stopped-upgrade-980156 stop: (2.137240283s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-980156 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-980156 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.508153824s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (161.91s)
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-335371 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-335371 "sudo systemctl is-active --quiet service kubelet": exit status 1 (225.270242ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
TestNoKubernetes/serial/ProfileList (0.86s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.86s)
TestNoKubernetes/serial/Stop (1.47s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-335371
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-335371: (1.467610491s)
--- PASS: TestNoKubernetes/serial/Stop (1.47s)
TestNoKubernetes/serial/StartNoArgs (44.04s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-335371 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-335371 --driver=kvm2  --container-runtime=crio: (44.039315057s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.04s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-335371 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-335371 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.417561ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)
TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-980156
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-980156: (1.020809567s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (88.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0425 19:48:36.328593   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m28.070524814s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (64.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.881837923s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (112.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m52.785993111s)
--- PASS: TestNetworkPlugins/group/calico/Start (112.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-120641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-120641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ssp5p" [f3b64e83-41b3-47f7-a2d3-47c37a14a04f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ssp5p" [f3b64e83-41b3-47f7-a2d3-47c37a14a04f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005277689s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-120641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
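The DNS, Localhost and HairPin subtests above all run inside the netcat deployment: nslookup checks in-cluster service resolution, while the two nc probes (-z scan only, -w 5 connect timeout, -i 5 interval) check loopback reachability and hairpin reachability, i.e. whether a pod can reach port 8080 through its own Service name. The equivalent manual probes, copied from the commands above:

	kubectl --context auto-120641 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin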

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2p2sl" [e074df6e-96a8-4b3a-a744-7b9449827689] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006567682s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
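The ControllerPod step only waits for the CNI daemonset pod to report Ready in kube-system. A hedged kubectl equivalent of that wait (kubectl is assumed to be on the host PATH):

	kubectl --context kindnet-120641 -n kube-system get pods -l app=kindnet
	kubectl --context kindnet-120641 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m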

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-120641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-120641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bqdmw" [e1426c12-6c03-4f86-b2f0-fa036bec7485] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bqdmw" [e1426c12-6c03-4f86-b2f0-fa036bec7485] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003472338s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (85.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m25.669572131s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.67s)
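Here --cni is given a path rather than a plugin name: minikube accepts either a built-in CNI name (e.g. bridge, calico, flannel, kindnet) or a manifest file to apply, which is how testdata/kube-flannel.yaml is injected into this profile. Stripped to the relevant flags, the start command is:

	out/minikube-linux-amd64 start -p custom-flannel-120641 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio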

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-120641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (128.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m8.972201011s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (128.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bz7r5" [d11bfc98-c8db-4df3-967b-e11bfc47bb1c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00799699s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-120641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-120641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kgrs7" [eca2ee30-3586-4d9d-821d-9da8512121c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kgrs7" [eca2ee30-3586-4d9d-821d-9da8512121c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005138667s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-120641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-120641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-120641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5pz7d" [5f73002a-7959-4a7e-af41-1cded58b7334] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5pz7d" [5f73002a-7959-4a7e-af41-1cded58b7334] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005733656s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (90.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m30.900622229s)
--- PASS: TestNetworkPlugins/group/flannel/Start (90.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-120641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (85.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0425 19:52:08.490891   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/addons-477322/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-120641 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m25.294301719s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-120641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-120641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-w4kz8" [fc84cf31-a6d5-45bb-90c6-f2c93063e772] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-w4kz8" [fc84cf31-a6d5-45bb-90c6-f2c93063e772] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004613041s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-120641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-c5jnf" [b2439fc2-5a80-4bfc-9b10-0b815a3c9106] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005579631s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (118.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-744552 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-744552 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m58.64505382s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (118.65s)
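The roughly two-minute first start is expected for this profile: --preload=false disables minikube's preloaded image/state tarball, so the v1.30.0 images are pulled individually inside the VM instead. Reduced to the flags that matter here:

	out/minikube-linux-amd64 start -p no-preload-744552 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.0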

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-120641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-120641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sfklr" [2592f851-f161-4f5d-8e33-ff1c61e1b3a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-sfklr" [2592f851-f161-4f5d-8e33-ff1c61e1b3a3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.004928899s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-120641 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (14.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-120641 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-krqz6" [7fda1e48-4e1b-4f13-93a3-26cf46e26194] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-krqz6" [7fda1e48-4e1b-4f13-93a3-26cf46e26194] Running
E0425 19:53:36.328630   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.004217588s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-120641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-120641 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-120641 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (65.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-512173 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-512173 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m5.242594741s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-142196 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0425 19:54:55.065731   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:54:55.070980   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:54:55.081252   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:54:55.101572   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:54:55.141887   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:54:55.222244   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:54:55.382813   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:54:55.703736   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:54:56.344352   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:54:57.624731   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
E0425 19:55:00.185618   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-142196 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m32.298343235s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.30s)
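The interleaved E0425 cert_rotation.go lines appear to be noise from client-go's certificate-rotation watcher: kubeconfig contexts for profiles deleted earlier in the run (auto-120641 here) still point at client.crt files that no longer exist on disk. They do not affect the start under test; the missing path can be confirmed directly, e.g.:

	ls /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt
	# expected: "No such file or directory", matching the watcher's complaint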

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-512173 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [09c7377f-44eb-4764-97e2-b21add69ffaf] Pending
helpers_test.go:344: "busybox" [09c7377f-44eb-4764-97e2-b21add69ffaf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0425 19:55:05.306705   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
helpers_test.go:344: "busybox" [09c7377f-44eb-4764-97e2-b21add69ffaf] Running
E0425 19:55:12.602984   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 19:55:12.608274   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 19:55:12.618576   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 19:55:12.638844   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 19:55:12.679122   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 19:55:12.759434   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004276071s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-512173 exec busybox -- /bin/sh -c "ulimit -n"
E0425 19:55:12.919947   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)
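DeployApp is a plain create-and-exec smoke test: busybox is created from testdata/busybox.yaml, waited on via the integration-test=busybox label, then queried for its open-file limit. The two kubectl calls, as run above:

	kubectl --context embed-certs-512173 create -f testdata/busybox.yaml
	kubectl --context embed-certs-512173 exec busybox -- /bin/sh -c "ulimit -n"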

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-512173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0425 19:55:13.240254   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
E0425 19:55:13.880525   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-512173 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.078626766s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-512173 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-744552 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5ff3790d-52bb-4f47-b928-3463daf9c77d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5ff3790d-52bb-4f47-b928-3463daf9c77d] Running
E0425 19:55:33.082000   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/kindnet-120641/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004627881s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-744552 exec busybox -- /bin/sh -c "ulimit -n"
E0425 19:55:36.028233   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/auto-120641/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-142196 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fa3cc9ba-0ade-4039-a7f9-377e809f2bdf] Pending
helpers_test.go:344: "busybox" [fa3cc9ba-0ade-4039-a7f9-377e809f2bdf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fa3cc9ba-0ade-4039-a7f9-377e809f2bdf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004027235s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-142196 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-744552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-744552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.021558923s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-744552 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-142196 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-142196 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (649.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-512173 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-512173 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (10m49.480120546s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-512173 -n embed-certs-512173
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (649.76s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (624.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-744552 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0425 19:58:10.371969   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/custom-flannel-120641/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-744552 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (10m23.710574586s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-744552 -n no-preload-744552
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (624.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (577.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-142196 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0425 19:58:17.750826   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:58:21.359838   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:21.365076   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:21.375281   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:21.395522   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:21.435771   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:21.516132   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:21.676512   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:21.997280   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:22.638304   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:23.919182   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:26.479720   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:27.583372   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:27.588652   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:27.598891   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:27.619118   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:27.659371   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:27.739677   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:27.900091   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:28.220698   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:28.861770   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:30.142282   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:31.600079   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:32.702740   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:36.328314   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
E0425 19:58:37.823264   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:38.231948   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 19:58:41.841116   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:58:48.063990   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
E0425 19:58:55.552727   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/calico-120641/client.crt: no such file or directory
E0425 19:59:02.321643   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
E0425 19:59:08.544804   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/bridge-120641/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-142196 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (9m37.514282014s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142196 -n default-k8s-diff-port-142196
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (577.80s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-210442 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-210442 --alsologtostderr -v=3: (2.310364944s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442 -n old-k8s-version-210442: exit status 7 (74.952915ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-210442 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
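Exit status 7 from minikube status on a stopped profile is expected rather than a failure (the test itself notes "may be ok"): status only exits 0 when the host, cluster and apiserver are all running. The two steps above, re-runnable by hand against this profile:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-210442   # prints "Stopped", exits 7
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-210442 --images=MetricsScraper=registry.k8s.io/echoserver:1.4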

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (60.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-366100 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0425 20:22:57.270591   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/enable-default-cni-120641/client.crt: no such file or directory
E0425 20:23:21.359700   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/flannel-120641/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-366100 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m0.113481164s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.11s)
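The newest-cni profile starts with --network-plugin=cni plus a kubeadm pod-network-cidr override, and --wait is narrowed to apiserver,system_pods,default_sa; as the later DeployApp and UserAppExistsAfterStop steps warn, cni mode needs additional setup before workload pods can schedule, so those steps pass trivially. Reduced to the flags that define the profile:

	out/minikube-linux-amd64 start -p newest-cni-366100 --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --wait=apiserver,system_pods,default_sa --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.30.0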

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-366100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-366100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.164518764s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.67s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-366100 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-366100 --alsologtostderr -v=3: (10.66618246s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.67s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366100 -n newest-cni-366100
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366100 -n newest-cni-366100: exit status 7 (75.531904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-366100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
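The Stop and EnableAddonAfterStop steps above amount to the following command sequence (a sketch assuming the same newest-cni-366100 profile; as the test notes, exit status 7 from status is expected while the node is stopped):

	out/minikube-linux-amd64 stop -p newest-cni-366100 --alsologtostderr -v=3
	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366100 -n newest-cni-366100    # prints "Stopped", exits 7
	out/minikube-linux-amd64 addons enable dashboard -p newest-cni-366100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4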

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (45.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-366100 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0425 20:23:36.328173   13682 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18757-6355/.minikube/profiles/functional-117423/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-366100 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (45.124212273s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366100 -n newest-cni-366100
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-366100 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-366100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-366100 -n newest-cni-366100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-366100 -n newest-cni-366100: exit status 2 (245.791435ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-366100 -n newest-cni-366100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-366100 -n newest-cni-366100: exit status 2 (241.468419ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-366100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-366100 -n newest-cni-366100
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-366100 -n newest-cni-366100
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.49s)
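The pause check above follows this sequence (a sketch using only the commands shown in the log; exit status 2 from status while the cluster is paused is treated as acceptable by the test):

	out/minikube-linux-amd64 pause -p newest-cni-366100 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-366100 -n newest-cni-366100   # "Paused", exits 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-366100 -n newest-cni-366100     # "Stopped", exits 2
	out/minikube-linux-amd64 unpause -p newest-cni-366100 --alsologtostderr -v=1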

                                                
                                    

Test skip (36/311)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0/cached-images 0
15 TestDownloadOnly/v1.30.0/binaries 0
16 TestDownloadOnly/v1.30.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
253 TestNetworkPlugins/group/kubenet 3.19
261 TestNetworkPlugins/group/cilium 3.46
281 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-120641 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-120641" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-120641" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-120641

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-120641"

                                                
                                                
----------------------- debugLogs end: kubenet-120641 [took: 3.049278016s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-120641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-120641
--- SKIP: TestNetworkPlugins/group/kubenet (3.19s)
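The repeated "Profile \"kubenet-120641\" not found" lines above are expected: the kubenet profile is never started before the debug-log collector runs. The log's own suggestion and the harness cleanup step can be replayed as a sketch (commands taken from this report):

	minikube profile list                               # view all profiles, as the log suggests
	out/minikube-linux-amd64 delete -p kubenet-120641   # cleanup run by helpers_test.go:178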

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-120641 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-120641

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-120641" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> k8s: kubelet logs:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-120641

>>> host: docker daemon status:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: docker daemon config:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: docker system info:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: cri-docker daemon status:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: cri-docker daemon config:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: cri-dockerd version:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: containerd daemon status:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: containerd daemon config:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: containerd config dump:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: crio daemon status:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: crio daemon config:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: /etc/crio:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

>>> host: crio config:
* Profile "cilium-120641" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-120641"

----------------------- debugLogs end: cilium-120641 [took: 3.305461785s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-120641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-120641
--- SKIP: TestNetworkPlugins/group/cilium (3.46s)

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-113000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-113000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
